2026-03-24 01:34:12.190917 | Job console starting
2026-03-24 01:34:12.204178 | Updating git repos
2026-03-24 01:34:12.286957 | Cloning repos into workspace
2026-03-24 01:34:12.539976 | Restoring repo states
2026-03-24 01:34:12.563627 | Merging changes
2026-03-24 01:34:12.563651 | Checking out repos
2026-03-24 01:34:12.849337 | Preparing playbooks
2026-03-24 01:34:13.455144 | Running Ansible setup
2026-03-24 01:34:17.949857 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-24 01:34:18.728147 |
2026-03-24 01:34:18.728327 | PLAY [Base pre]
2026-03-24 01:34:18.745776 |
2026-03-24 01:34:18.745930 | TASK [Setup log path fact]
2026-03-24 01:34:18.776956 | orchestrator | ok
2026-03-24 01:34:18.794676 |
2026-03-24 01:34:18.794819 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-24 01:34:18.844184 | orchestrator | ok
2026-03-24 01:34:18.860843 |
2026-03-24 01:34:18.861007 | TASK [emit-job-header : Print job information]
2026-03-24 01:34:18.912087 | # Job Information
2026-03-24 01:34:18.912384 | Ansible Version: 2.16.14
2026-03-24 01:34:18.912493 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-03-24 01:34:18.912558 | Pipeline: periodic-midnight
2026-03-24 01:34:18.912599 | Executor: 521e9411259a
2026-03-24 01:34:18.912636 | Triggered by: https://github.com/osism/testbed
2026-03-24 01:34:18.912675 | Event ID: 1741def6560a4c5d8927e780ef1e0fbc
2026-03-24 01:34:18.922285 |
2026-03-24 01:34:18.922459 | LOOP [emit-job-header : Print node information]
2026-03-24 01:34:19.055068 | orchestrator | ok:
2026-03-24 01:34:19.059799 | orchestrator | # Node Information
2026-03-24 01:34:19.059919 | orchestrator | Inventory Hostname: orchestrator
2026-03-24 01:34:19.059952 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-24 01:34:19.059978 | orchestrator | Username: zuul-testbed03
2026-03-24 01:34:19.060001 | orchestrator | Distro: Debian 12.13
2026-03-24 01:34:19.060025 | orchestrator | Provider: static-testbed
2026-03-24 01:34:19.060047 | orchestrator | Region:
2026-03-24 01:34:19.060069 | orchestrator | Label: testbed-orchestrator
2026-03-24 01:34:19.060147 | orchestrator | Product Name: OpenStack Nova
2026-03-24 01:34:19.060172 | orchestrator | Interface IP: 81.163.193.140
2026-03-24 01:34:19.077525 |
2026-03-24 01:34:19.077674 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-24 01:34:19.580129 | orchestrator -> localhost | changed
2026-03-24 01:34:19.596845 |
2026-03-24 01:34:19.597007 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-24 01:34:20.686726 | orchestrator -> localhost | changed
2026-03-24 01:34:20.701967 |
2026-03-24 01:34:20.702113 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-24 01:34:21.011612 | orchestrator -> localhost | ok
2026-03-24 01:34:21.027577 |
2026-03-24 01:34:21.027759 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-24 01:34:21.065002 | orchestrator | ok
2026-03-24 01:34:21.088901 | orchestrator | included: /var/lib/zuul/builds/03d6a5508dd54638a48ae341d1b9631e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-24 01:34:21.097881 |
2026-03-24 01:34:21.097988 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-24 01:34:24.268577 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-24 01:34:24.268897 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/03d6a5508dd54638a48ae341d1b9631e/work/03d6a5508dd54638a48ae341d1b9631e_id_rsa
2026-03-24 01:34:24.268951 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/03d6a5508dd54638a48ae341d1b9631e/work/03d6a5508dd54638a48ae341d1b9631e_id_rsa.pub
2026-03-24 01:34:24.268981 | orchestrator -> localhost | The key fingerprint is:
2026-03-24 01:34:24.269010 | orchestrator -> localhost | SHA256:V9+CPbXpOQfO1hhbbctSpVR2AXKzZgzbqG3lE4E1x+I zuul-build-sshkey
2026-03-24 01:34:24.269036 | orchestrator -> localhost | The key's randomart image is:
2026-03-24 01:34:24.269075 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-24 01:34:24.269101 | orchestrator -> localhost | | ooBoo=|
2026-03-24 01:34:24.269127 | orchestrator -> localhost | | .O.*+.|
2026-03-24 01:34:24.269151 | orchestrator -> localhost | | ooOo o|
2026-03-24 01:34:24.269175 | orchestrator -> localhost | | o.=E.+=|
2026-03-24 01:34:24.269198 | orchestrator -> localhost | | S..o.oO+=|
2026-03-24 01:34:24.269230 | orchestrator -> localhost | | .. o=@o|
2026-03-24 01:34:24.269255 | orchestrator -> localhost | | .**o|
2026-03-24 01:34:24.269279 | orchestrator -> localhost | | .. o|
2026-03-24 01:34:24.269305 | orchestrator -> localhost | | |
2026-03-24 01:34:24.269329 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-24 01:34:24.269405 | orchestrator -> localhost | ok: Runtime: 0:00:02.638057
2026-03-24 01:34:24.278320 |
2026-03-24 01:34:24.278456 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-24 01:34:24.310318 | orchestrator | ok
2026-03-24 01:34:24.321519 | orchestrator | included: /var/lib/zuul/builds/03d6a5508dd54638a48ae341d1b9631e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-24 01:34:24.330938 |
2026-03-24 01:34:24.331041 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-24 01:34:24.354767 | orchestrator | skipping: Conditional result was False
2026-03-24 01:34:24.362737 |
2026-03-24 01:34:24.362878 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-24 01:34:25.223604 | orchestrator | changed
2026-03-24 01:34:25.232993 |
2026-03-24 01:34:25.233118 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-24 01:34:25.546551 | orchestrator | ok
2026-03-24 01:34:25.553187 |
2026-03-24 01:34:25.553304 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-24 01:34:25.994581 | orchestrator | ok
2026-03-24 01:34:26.001272 |
2026-03-24 01:34:26.001388 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-24 01:34:26.452905 | orchestrator | ok
2026-03-24 01:34:26.463133 |
2026-03-24 01:34:26.463264 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-24 01:34:26.497857 | orchestrator | skipping: Conditional result was False
2026-03-24 01:34:26.509108 |
2026-03-24 01:34:26.509240 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-24 01:34:26.949685 | orchestrator -> localhost | changed
2026-03-24 01:34:26.964944 |
2026-03-24 01:34:26.965070 | TASK [add-build-sshkey : Add back temp key]
2026-03-24 01:34:27.313886 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/03d6a5508dd54638a48ae341d1b9631e/work/03d6a5508dd54638a48ae341d1b9631e_id_rsa (zuul-build-sshkey)
2026-03-24 01:34:27.314470 | orchestrator -> localhost | ok: Runtime: 0:00:00.021378
2026-03-24 01:34:27.329674 |
2026-03-24 01:34:27.329826 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-24 01:34:27.774477 | orchestrator | ok
2026-03-24 01:34:27.782913 |
2026-03-24 01:34:27.783052 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-24 01:34:27.818339 | orchestrator | skipping: Conditional result was False
2026-03-24 01:34:27.880393 |
2026-03-24 01:34:27.880556 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-24 01:34:28.325474 | orchestrator | ok
2026-03-24 01:34:28.340852 |
2026-03-24 01:34:28.340985 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-24 01:34:28.388718 | orchestrator | ok
2026-03-24 01:34:28.399777 |
2026-03-24 01:34:28.399915 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-24 01:34:28.720128 | orchestrator -> localhost | ok
2026-03-24 01:34:28.737305 |
2026-03-24 01:34:28.737484 | TASK [validate-host : Collect information about the host]
2026-03-24 01:34:30.162700 | orchestrator | ok
2026-03-24 01:34:30.179934 |
2026-03-24 01:34:30.180068 | TASK [validate-host : Sanitize hostname]
2026-03-24 01:34:30.249009 | orchestrator | ok
2026-03-24 01:34:30.257321 |
2026-03-24 01:34:30.257554 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-24 01:34:30.852990 | orchestrator -> localhost | changed
2026-03-24 01:34:30.867792 |
2026-03-24 01:34:30.868052 | TASK [validate-host : Collect information about zuul worker]
2026-03-24 01:34:31.339216 | orchestrator | ok
2026-03-24 01:34:31.347049 |
2026-03-24 01:34:31.347205 | TASK [validate-host : Write out all zuul information for each host]
2026-03-24 01:34:31.931476 | orchestrator -> localhost | changed
2026-03-24 01:34:31.946314 |
2026-03-24 01:34:31.946462 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-24 01:34:32.247873 | orchestrator | ok
2026-03-24 01:34:32.257824 |
2026-03-24 01:34:32.257957 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-24 01:34:55.270571 | orchestrator | changed:
2026-03-24 01:34:55.270919 | orchestrator | .d..t...... src/
2026-03-24 01:34:55.270978 | orchestrator | .d..t...... src/github.com/
2026-03-24 01:34:55.271019 | orchestrator | .d..t...... src/github.com/osism/
2026-03-24 01:34:55.271055 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-24 01:34:55.271090 | orchestrator | RedHat.yml
2026-03-24 01:34:55.289032 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-24 01:34:55.289050 | orchestrator | RedHat.yml
2026-03-24 01:34:55.289102 | orchestrator | = 2.2.0"...
2026-03-24 01:35:05.238352 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-24 01:35:05.258179 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-24 01:35:05.753929 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-24 01:35:06.406666 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-24 01:35:06.478232 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-24 01:35:06.919454 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-24 01:35:07.339534 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-24 01:35:08.122770 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-24 01:35:08.122852 | orchestrator |
2026-03-24 01:35:08.122859 | orchestrator | Providers are signed by their developers.
2026-03-24 01:35:08.122864 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-24 01:35:08.122869 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-24 01:35:08.122875 | orchestrator |
2026-03-24 01:35:08.122879 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-24 01:35:08.122894 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-24 01:35:08.122898 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-24 01:35:08.122902 | orchestrator | you run "tofu init" in the future.
2026-03-24 01:35:08.122907 | orchestrator |
2026-03-24 01:35:08.122911 | orchestrator | OpenTofu has been successfully initialized!
2026-03-24 01:35:08.122915 | orchestrator |
2026-03-24 01:35:08.122919 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-24 01:35:08.122923 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-24 01:35:08.122927 | orchestrator | should now work.
2026-03-24 01:35:08.122932 | orchestrator |
2026-03-24 01:35:08.122936 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-24 01:35:08.122940 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-24 01:35:08.122944 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-24 01:35:08.306092 | orchestrator | Created and switched to workspace "ci"!
2026-03-24 01:35:08.306132 | orchestrator |
2026-03-24 01:35:08.306138 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-24 01:35:08.306143 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-24 01:35:08.306148 | orchestrator | for this configuration.
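The provider selections above would follow from a `required_providers` block roughly like the following. This is a hypothetical sketch reconstructed from the log, not the actual testbed configuration; the `>= 2.2.0` constraint for hashicorp/local is inferred from a truncated log fragment, and the `>= 1.53.0` constraint for the openstack provider appears verbatim in the log.

```hcl
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.7.0 in this run
    }
    null = {
      source = "hashicorp/null" # unconstrained: latest (v3.2.4) selected
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.4.0 in this run
    }
  }
}
```

`tofu init` records the resolved versions in `.terraform.lock.hcl`, as the output above notes, so later runs reuse the same selections.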
2026-03-24 01:35:08.405353 | orchestrator | ci.auto.tfvars
2026-03-24 01:35:08.557665 | orchestrator | default_custom.tf
2026-03-24 01:35:09.488770 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-24 01:35:10.158552 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-24 01:35:10.369348 | orchestrator |
2026-03-24 01:35:10.369425 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-24 01:35:10.369439 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-24 01:35:10.369450 | orchestrator | + create
2026-03-24 01:35:10.369460 | orchestrator | <= read (data resources)
2026-03-24 01:35:10.369471 | orchestrator |
2026-03-24 01:35:10.369480 | orchestrator | OpenTofu will perform the following actions:
2026-03-24 01:35:10.369499 | orchestrator |
2026-03-24 01:35:10.369509 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-24 01:35:10.369520 | orchestrator | # (config refers to values not yet known)
2026-03-24 01:35:10.369529 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-24 01:35:10.369539 | orchestrator | + checksum = (known after apply)
2026-03-24 01:35:10.369549 | orchestrator | + created_at = (known after apply)
2026-03-24 01:35:10.369595 | orchestrator | + file = (known after apply)
2026-03-24 01:35:10.369605 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.369639 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.369650 | orchestrator | + min_disk_gb = (known after apply)
2026-03-24 01:35:10.369659 | orchestrator | + min_ram_mb = (known after apply)
2026-03-24 01:35:10.369669 | orchestrator | + most_recent = true
2026-03-24 01:35:10.369679 | orchestrator | + name = (known after apply)
2026-03-24 01:35:10.369688 | orchestrator | + protected = (known after apply)
2026-03-24 01:35:10.369697 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.369709 | orchestrator | + schema = (known after apply)
2026-03-24 01:35:10.369719 | orchestrator | + size_bytes = (known after apply)
2026-03-24 01:35:10.369728 | orchestrator | + tags = (known after apply)
2026-03-24 01:35:10.369738 | orchestrator | + updated_at = (known after apply)
2026-03-24 01:35:10.369747 | orchestrator | }
2026-03-24 01:35:10.369757 | orchestrator |
2026-03-24 01:35:10.369766 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-24 01:35:10.369775 | orchestrator | # (config refers to values not yet known)
2026-03-24 01:35:10.369787 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-24 01:35:10.369804 | orchestrator | + checksum = (known after apply)
2026-03-24 01:35:10.369820 | orchestrator | + created_at = (known after apply)
2026-03-24 01:35:10.369835 | orchestrator | + file = (known after apply)
2026-03-24 01:35:10.369851 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.369867 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.369883 | orchestrator | + min_disk_gb = (known after apply)
2026-03-24 01:35:10.369898 | orchestrator | + min_ram_mb = (known after apply)
2026-03-24 01:35:10.369913 | orchestrator | + most_recent = true
2026-03-24 01:35:10.369931 | orchestrator | + name = (known after apply)
2026-03-24 01:35:10.369948 | orchestrator | + protected = (known after apply)
2026-03-24 01:35:10.369965 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.370162 | orchestrator | + schema = (known after apply)
2026-03-24 01:35:10.370189 | orchestrator | + size_bytes = (known after apply)
2026-03-24 01:35:10.370199 | orchestrator | + tags = (known after apply)
2026-03-24 01:35:10.370208 | orchestrator | + updated_at = (known after apply)
2026-03-24 01:35:10.370217 | orchestrator | }
2026-03-24 01:35:10.370235 | orchestrator |
2026-03-24 01:35:10.370245 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-24 01:35:10.370255 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-24 01:35:10.370264 | orchestrator | + content = (known after apply)
2026-03-24 01:35:10.370274 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-24 01:35:10.370283 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-24 01:35:10.370292 | orchestrator | + content_md5 = (known after apply)
2026-03-24 01:35:10.370301 | orchestrator | + content_sha1 = (known after apply)
2026-03-24 01:35:10.370310 | orchestrator | + content_sha256 = (known after apply)
2026-03-24 01:35:10.370319 | orchestrator | + content_sha512 = (known after apply)
2026-03-24 01:35:10.370329 | orchestrator | + directory_permission = "0777"
2026-03-24 01:35:10.370338 | orchestrator | + file_permission = "0644"
2026-03-24 01:35:10.370347 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-24 01:35:10.370357 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.370366 | orchestrator | }
2026-03-24 01:35:10.370375 | orchestrator |
2026-03-24 01:35:10.370384 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-24 01:35:10.370393 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-24 01:35:10.370403 | orchestrator | + content = (known after apply)
2026-03-24 01:35:10.370412 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-24 01:35:10.370421 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-24 01:35:10.370430 | orchestrator | + content_md5 = (known after apply)
2026-03-24 01:35:10.370439 | orchestrator | + content_sha1 = (known after apply)
2026-03-24 01:35:10.370448 | orchestrator | + content_sha256 = (known after apply)
2026-03-24 01:35:10.370470 | orchestrator | + content_sha512 = (known after apply)
2026-03-24 01:35:10.370480 | orchestrator | + directory_permission = "0777"
2026-03-24 01:35:10.370489 | orchestrator | + file_permission = "0644"
2026-03-24 01:35:10.370509 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-24 01:35:10.370518 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.370527 | orchestrator | }
2026-03-24 01:35:10.370537 | orchestrator |
2026-03-24 01:35:10.370546 | orchestrator | # local_file.inventory will be created
2026-03-24 01:35:10.370580 | orchestrator | + resource "local_file" "inventory" {
2026-03-24 01:35:10.370594 | orchestrator | + content = (known after apply)
2026-03-24 01:35:10.370604 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-24 01:35:10.370613 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-24 01:35:10.370622 | orchestrator | + content_md5 = (known after apply)
2026-03-24 01:35:10.370631 | orchestrator | + content_sha1 = (known after apply)
2026-03-24 01:35:10.370642 | orchestrator | + content_sha256 = (known after apply)
2026-03-24 01:35:10.370652 | orchestrator | + content_sha512 = (known after apply)
2026-03-24 01:35:10.370661 | orchestrator | + directory_permission = "0777"
2026-03-24 01:35:10.370670 | orchestrator | + file_permission = "0644"
2026-03-24 01:35:10.370679 | orchestrator | + filename = "inventory.ci"
2026-03-24 01:35:10.370688 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.370698 | orchestrator | }
2026-03-24 01:35:10.370707 | orchestrator |
2026-03-24 01:35:10.370716 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-24 01:35:10.370732 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-24 01:35:10.370748 | orchestrator | + content = (sensitive value)
2026-03-24 01:35:10.370762 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-24 01:35:10.371696 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-24 01:35:10.371713 | orchestrator | + content_md5 = (known after apply)
2026-03-24 01:35:10.371722 | orchestrator | + content_sha1 = (known after apply)
2026-03-24 01:35:10.371731 | orchestrator | + content_sha256 = (known after apply)
2026-03-24 01:35:10.371741 | orchestrator | + content_sha512 = (known after apply)
2026-03-24 01:35:10.371750 | orchestrator | + directory_permission = "0700"
2026-03-24 01:35:10.371761 | orchestrator | + file_permission = "0600"
2026-03-24 01:35:10.371771 | orchestrator | + filename = ".id_rsa.ci"
2026-03-24 01:35:10.371780 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.371790 | orchestrator | }
2026-03-24 01:35:10.371799 | orchestrator |
2026-03-24 01:35:10.371809 | orchestrator | # null_resource.node_semaphore will be created
2026-03-24 01:35:10.371818 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-24 01:35:10.371828 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.371837 | orchestrator | }
2026-03-24 01:35:10.371851 | orchestrator |
2026-03-24 01:35:10.371868 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-24 01:35:10.371884 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-24 01:35:10.371899 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.371915 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.371931 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.371946 | orchestrator | + image_id = (known after apply)
2026-03-24 01:35:10.371962 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.371978 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-24 01:35:10.371995 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.372488 | orchestrator | + size = 80
2026-03-24 01:35:10.372502 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.372512 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.372521 | orchestrator | }
2026-03-24 01:35:10.372531 | orchestrator |
2026-03-24 01:35:10.372541 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-24 01:35:10.372550 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-24 01:35:10.372602 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.372612 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.372622 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.372645 | orchestrator | + image_id = (known after apply)
2026-03-24 01:35:10.372655 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.372664 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-24 01:35:10.372673 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.372823 | orchestrator | + size = 80
2026-03-24 01:35:10.372833 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.372842 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.372852 | orchestrator | }
2026-03-24 01:35:10.372861 | orchestrator |
2026-03-24 01:35:10.372871 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-24 01:35:10.372880 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-24 01:35:10.372890 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.372910 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.372920 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.372929 | orchestrator | + image_id = (known after apply)
2026-03-24 01:35:10.372939 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.372948 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-24 01:35:10.372957 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.372966 | orchestrator | + size = 80
2026-03-24 01:35:10.372977 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.372987 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.372997 | orchestrator | }
2026-03-24 01:35:10.373007 | orchestrator |
2026-03-24 01:35:10.373017 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-24 01:35:10.373028 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-24 01:35:10.373038 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.373048 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.373059 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.373069 | orchestrator | + image_id = (known after apply)
2026-03-24 01:35:10.373079 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.373089 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-24 01:35:10.373100 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.373110 | orchestrator | + size = 80
2026-03-24 01:35:10.373126 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.373138 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.373147 | orchestrator | }
2026-03-24 01:35:10.373157 | orchestrator |
2026-03-24 01:35:10.373166 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-24 01:35:10.373175 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-24 01:35:10.373184 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.373194 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.373203 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.373212 | orchestrator | + image_id = (known after apply)
2026-03-24 01:35:10.373221 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.373231 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-24 01:35:10.373240 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.373249 | orchestrator | + size = 80
2026-03-24 01:35:10.373258 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.373267 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.373276 | orchestrator | }
2026-03-24 01:35:10.373286 | orchestrator |
2026-03-24 01:35:10.373295 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-24 01:35:10.373304 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-24 01:35:10.373313 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.373322 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.373332 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.373349 | orchestrator | + image_id = (known after apply)
2026-03-24 01:35:10.373358 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.373367 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-24 01:35:10.373377 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.373386 | orchestrator | + size = 80
2026-03-24 01:35:10.373395 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.373404 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.373413 | orchestrator | }
2026-03-24 01:35:10.373423 | orchestrator |
2026-03-24 01:35:10.373432 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-24 01:35:10.373441 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-24 01:35:10.373450 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.373459 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.373468 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.373478 | orchestrator | + image_id = (known after apply)
2026-03-24 01:35:10.373487 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.373496 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-24 01:35:10.373505 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.373514 | orchestrator | + size = 80
2026-03-24 01:35:10.373524 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.373533 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.373542 | orchestrator | }
2026-03-24 01:35:10.373551 | orchestrator |
2026-03-24 01:35:10.373577 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-24 01:35:10.373589 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-24 01:35:10.373599 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.373608 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.373617 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.373627 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.373636 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-24 01:35:10.373645 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.373655 | orchestrator | + size = 20
2026-03-24 01:35:10.373664 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.373674 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.373683 | orchestrator | }
2026-03-24 01:35:10.373692 | orchestrator |
2026-03-24 01:35:10.373701 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-24 01:35:10.373711 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-24 01:35:10.373720 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.373729 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.373738 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.373748 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.373757 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-24 01:35:10.373766 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.373775 | orchestrator | + size = 20
2026-03-24 01:35:10.373785 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.373794 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.373803 | orchestrator | }
2026-03-24 01:35:10.373813 | orchestrator |
2026-03-24 01:35:10.373822 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-24 01:35:10.373831 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-24 01:35:10.374047 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.374065 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.374075 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.374084 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.374093 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-24 01:35:10.374103 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.374120 | orchestrator | + size = 20
2026-03-24 01:35:10.374130 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.374139 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.374148 | orchestrator | }
2026-03-24 01:35:10.374158 | orchestrator |
2026-03-24 01:35:10.374167 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-24 01:35:10.374176 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-24 01:35:10.374185 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.374195 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.374204 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.374219 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.374229 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-24 01:35:10.374238 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.374247 | orchestrator | + size = 20
2026-03-24 01:35:10.374257 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.374266 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.374275 | orchestrator | }
2026-03-24 01:35:10.374284 | orchestrator |
2026-03-24 01:35:10.374293 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-24 01:35:10.374303 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-24 01:35:10.374312 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.374321 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.374331 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.374340 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.374349 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-24 01:35:10.374359 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.374368 | orchestrator | + size = 20
2026-03-24 01:35:10.374377 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.374387 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.374396 | orchestrator | }
2026-03-24 01:35:10.374405 | orchestrator |
2026-03-24 01:35:10.374414 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-24 01:35:10.374424 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-24 01:35:10.374433 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.374442 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.374451 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.374460 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.374470 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-24 01:35:10.374479 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.374488 | orchestrator | + size = 20
2026-03-24 01:35:10.374497 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.374507 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.374516 | orchestrator | }
2026-03-24 01:35:10.374525 | orchestrator |
2026-03-24 01:35:10.374535 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-24 01:35:10.374544 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-24 01:35:10.374553 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.374611 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.374621 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.374630 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.374640 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-24 01:35:10.374649 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.374658 | orchestrator | + size = 20
2026-03-24 01:35:10.374666 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.374674 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.374683 | orchestrator | }
2026-03-24 01:35:10.374691 | orchestrator |
2026-03-24 01:35:10.374700 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-24 01:35:10.374708 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-24 01:35:10.374722 | orchestrator | + attachment = (known after apply)
2026-03-24 01:35:10.374730 | orchestrator | + availability_zone = "nova"
2026-03-24 01:35:10.374739 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.374747 | orchestrator | + metadata = (known after apply)
2026-03-24 01:35:10.374756 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-24 01:35:10.374764 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.374772 | orchestrator | + size = 20
2026-03-24 01:35:10.374781 | orchestrator | + volume_retype_policy = "never"
2026-03-24 01:35:10.374789 | orchestrator | + volume_type = "ssd"
2026-03-24 01:35:10.374798 | orchestrator | }
2026-03-24 01:35:10.374806 | orchestrator |
2026-03-24 01:35:10.374815 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-24 01:35:10.374823 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-24 01:35:10.374831 | orchestrator | + attachment = (known after apply) 2026-03-24 01:35:10.374840 | orchestrator | + availability_zone = "nova" 2026-03-24 01:35:10.374848 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.374857 | orchestrator | + metadata = (known after apply) 2026-03-24 01:35:10.374865 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-24 01:35:10.374874 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.374882 | orchestrator | + size = 20 2026-03-24 01:35:10.374890 | orchestrator | + volume_retype_policy = "never" 2026-03-24 01:35:10.374899 | orchestrator | + volume_type = "ssd" 2026-03-24 01:35:10.374907 | orchestrator | } 2026-03-24 01:35:10.374916 | orchestrator | 2026-03-24 01:35:10.374924 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-24 01:35:10.374932 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-24 01:35:10.374941 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-24 01:35:10.374949 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-24 01:35:10.374958 | orchestrator | + all_metadata = (known after apply) 2026-03-24 01:35:10.375215 | orchestrator | + all_tags = (known after apply) 2026-03-24 01:35:10.375241 | orchestrator | + availability_zone = "nova" 2026-03-24 01:35:10.375254 | orchestrator | + config_drive = true 2026-03-24 01:35:10.375307 | orchestrator | + created = (known after apply) 2026-03-24 01:35:10.375321 | orchestrator | + flavor_id = (known after apply) 2026-03-24 01:35:10.375339 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-24 01:35:10.375359 | orchestrator | + force_delete = false 2026-03-24 01:35:10.375378 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-24 01:35:10.375398 | 
orchestrator | + id = (known after apply) 2026-03-24 01:35:10.375417 | orchestrator | + image_id = (known after apply) 2026-03-24 01:35:10.375429 | orchestrator | + image_name = (known after apply) 2026-03-24 01:35:10.375440 | orchestrator | + key_pair = "testbed" 2026-03-24 01:35:10.375451 | orchestrator | + name = "testbed-manager" 2026-03-24 01:35:10.375463 | orchestrator | + power_state = "active" 2026-03-24 01:35:10.375474 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.375485 | orchestrator | + security_groups = (known after apply) 2026-03-24 01:35:10.375496 | orchestrator | + stop_before_destroy = false 2026-03-24 01:35:10.375508 | orchestrator | + updated = (known after apply) 2026-03-24 01:35:10.375519 | orchestrator | + user_data = (sensitive value) 2026-03-24 01:35:10.375530 | orchestrator | 2026-03-24 01:35:10.375543 | orchestrator | + block_device { 2026-03-24 01:35:10.375589 | orchestrator | + boot_index = 0 2026-03-24 01:35:10.375604 | orchestrator | + delete_on_termination = false 2026-03-24 01:35:10.375615 | orchestrator | + destination_type = "volume" 2026-03-24 01:35:10.375626 | orchestrator | + multiattach = false 2026-03-24 01:35:10.375638 | orchestrator | + source_type = "volume" 2026-03-24 01:35:10.375649 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.375680 | orchestrator | } 2026-03-24 01:35:10.375692 | orchestrator | 2026-03-24 01:35:10.375703 | orchestrator | + network { 2026-03-24 01:35:10.375715 | orchestrator | + access_network = false 2026-03-24 01:35:10.375727 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-24 01:35:10.375738 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-24 01:35:10.375749 | orchestrator | + mac = (known after apply) 2026-03-24 01:35:10.375761 | orchestrator | + name = (known after apply) 2026-03-24 01:35:10.375772 | orchestrator | + port = (known after apply) 2026-03-24 01:35:10.375783 | orchestrator | + uuid = (known after apply) 2026-03-24 
01:35:10.375795 | orchestrator | } 2026-03-24 01:35:10.375806 | orchestrator | } 2026-03-24 01:35:10.375818 | orchestrator | 2026-03-24 01:35:10.375829 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-24 01:35:10.375841 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-24 01:35:10.375852 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-24 01:35:10.375864 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-24 01:35:10.375875 | orchestrator | + all_metadata = (known after apply) 2026-03-24 01:35:10.375886 | orchestrator | + all_tags = (known after apply) 2026-03-24 01:35:10.375897 | orchestrator | + availability_zone = "nova" 2026-03-24 01:35:10.375909 | orchestrator | + config_drive = true 2026-03-24 01:35:10.375920 | orchestrator | + created = (known after apply) 2026-03-24 01:35:10.375932 | orchestrator | + flavor_id = (known after apply) 2026-03-24 01:35:10.375943 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-24 01:35:10.375954 | orchestrator | + force_delete = false 2026-03-24 01:35:10.375966 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-24 01:35:10.375979 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.375990 | orchestrator | + image_id = (known after apply) 2026-03-24 01:35:10.376001 | orchestrator | + image_name = (known after apply) 2026-03-24 01:35:10.376013 | orchestrator | + key_pair = "testbed" 2026-03-24 01:35:10.376024 | orchestrator | + name = "testbed-node-0" 2026-03-24 01:35:10.376035 | orchestrator | + power_state = "active" 2026-03-24 01:35:10.376047 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.376058 | orchestrator | + security_groups = (known after apply) 2026-03-24 01:35:10.376069 | orchestrator | + stop_before_destroy = false 2026-03-24 01:35:10.376081 | orchestrator | + updated = (known after apply) 2026-03-24 01:35:10.376092 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-24 01:35:10.376104 | orchestrator | 2026-03-24 01:35:10.376116 | orchestrator | + block_device { 2026-03-24 01:35:10.376127 | orchestrator | + boot_index = 0 2026-03-24 01:35:10.376139 | orchestrator | + delete_on_termination = false 2026-03-24 01:35:10.376150 | orchestrator | + destination_type = "volume" 2026-03-24 01:35:10.376161 | orchestrator | + multiattach = false 2026-03-24 01:35:10.376173 | orchestrator | + source_type = "volume" 2026-03-24 01:35:10.376184 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.376195 | orchestrator | } 2026-03-24 01:35:10.376207 | orchestrator | 2026-03-24 01:35:10.376218 | orchestrator | + network { 2026-03-24 01:35:10.376229 | orchestrator | + access_network = false 2026-03-24 01:35:10.376241 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-24 01:35:10.376252 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-24 01:35:10.376263 | orchestrator | + mac = (known after apply) 2026-03-24 01:35:10.376274 | orchestrator | + name = (known after apply) 2026-03-24 01:35:10.376286 | orchestrator | + port = (known after apply) 2026-03-24 01:35:10.376297 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.376309 | orchestrator | } 2026-03-24 01:35:10.376321 | orchestrator | } 2026-03-24 01:35:10.376332 | orchestrator | 2026-03-24 01:35:10.376343 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-24 01:35:10.376355 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-24 01:35:10.376367 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-24 01:35:10.376384 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-24 01:35:10.376396 | orchestrator | + all_metadata = (known after apply) 2026-03-24 01:35:10.376407 | orchestrator | + all_tags = (known after apply) 2026-03-24 01:35:10.376419 | orchestrator | + availability_zone = "nova" 2026-03-24 01:35:10.376430 
| orchestrator | + config_drive = true 2026-03-24 01:35:10.376441 | orchestrator | + created = (known after apply) 2026-03-24 01:35:10.376453 | orchestrator | + flavor_id = (known after apply) 2026-03-24 01:35:10.376464 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-24 01:35:10.376476 | orchestrator | + force_delete = false 2026-03-24 01:35:10.376487 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-24 01:35:10.376498 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.376510 | orchestrator | + image_id = (known after apply) 2026-03-24 01:35:10.376521 | orchestrator | + image_name = (known after apply) 2026-03-24 01:35:10.376533 | orchestrator | + key_pair = "testbed" 2026-03-24 01:35:10.376544 | orchestrator | + name = "testbed-node-1" 2026-03-24 01:35:10.376591 | orchestrator | + power_state = "active" 2026-03-24 01:35:10.376605 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.376617 | orchestrator | + security_groups = (known after apply) 2026-03-24 01:35:10.376628 | orchestrator | + stop_before_destroy = false 2026-03-24 01:35:10.376639 | orchestrator | + updated = (known after apply) 2026-03-24 01:35:10.376657 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-24 01:35:10.376669 | orchestrator | 2026-03-24 01:35:10.376681 | orchestrator | + block_device { 2026-03-24 01:35:10.376692 | orchestrator | + boot_index = 0 2026-03-24 01:35:10.376704 | orchestrator | + delete_on_termination = false 2026-03-24 01:35:10.376716 | orchestrator | + destination_type = "volume" 2026-03-24 01:35:10.376727 | orchestrator | + multiattach = false 2026-03-24 01:35:10.376738 | orchestrator | + source_type = "volume" 2026-03-24 01:35:10.376749 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.376761 | orchestrator | } 2026-03-24 01:35:10.376773 | orchestrator | 2026-03-24 01:35:10.376784 | orchestrator | + network { 2026-03-24 01:35:10.376796 | orchestrator | + access_network = 
false 2026-03-24 01:35:10.376807 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-24 01:35:10.376819 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-24 01:35:10.376831 | orchestrator | + mac = (known after apply) 2026-03-24 01:35:10.376842 | orchestrator | + name = (known after apply) 2026-03-24 01:35:10.376853 | orchestrator | + port = (known after apply) 2026-03-24 01:35:10.376865 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.376876 | orchestrator | } 2026-03-24 01:35:10.376888 | orchestrator | } 2026-03-24 01:35:10.376899 | orchestrator | 2026-03-24 01:35:10.376911 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-24 01:35:10.376922 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-24 01:35:10.376934 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-24 01:35:10.376945 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-24 01:35:10.376960 | orchestrator | + all_metadata = (known after apply) 2026-03-24 01:35:10.376972 | orchestrator | + all_tags = (known after apply) 2026-03-24 01:35:10.376983 | orchestrator | + availability_zone = "nova" 2026-03-24 01:35:10.376995 | orchestrator | + config_drive = true 2026-03-24 01:35:10.377006 | orchestrator | + created = (known after apply) 2026-03-24 01:35:10.377017 | orchestrator | + flavor_id = (known after apply) 2026-03-24 01:35:10.377029 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-24 01:35:10.377040 | orchestrator | + force_delete = false 2026-03-24 01:35:10.377052 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-24 01:35:10.377063 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.377074 | orchestrator | + image_id = (known after apply) 2026-03-24 01:35:10.377095 | orchestrator | + image_name = (known after apply) 2026-03-24 01:35:10.377106 | orchestrator | + key_pair = "testbed" 2026-03-24 01:35:10.377118 | orchestrator | + name = 
"testbed-node-2" 2026-03-24 01:35:10.377129 | orchestrator | + power_state = "active" 2026-03-24 01:35:10.377141 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.377152 | orchestrator | + security_groups = (known after apply) 2026-03-24 01:35:10.377164 | orchestrator | + stop_before_destroy = false 2026-03-24 01:35:10.377175 | orchestrator | + updated = (known after apply) 2026-03-24 01:35:10.377187 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-24 01:35:10.377199 | orchestrator | 2026-03-24 01:35:10.377210 | orchestrator | + block_device { 2026-03-24 01:35:10.377222 | orchestrator | + boot_index = 0 2026-03-24 01:35:10.377234 | orchestrator | + delete_on_termination = false 2026-03-24 01:35:10.377245 | orchestrator | + destination_type = "volume" 2026-03-24 01:35:10.377256 | orchestrator | + multiattach = false 2026-03-24 01:35:10.377268 | orchestrator | + source_type = "volume" 2026-03-24 01:35:10.377279 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.377291 | orchestrator | } 2026-03-24 01:35:10.377302 | orchestrator | 2026-03-24 01:35:10.377314 | orchestrator | + network { 2026-03-24 01:35:10.377325 | orchestrator | + access_network = false 2026-03-24 01:35:10.377337 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-24 01:35:10.377348 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-24 01:35:10.377359 | orchestrator | + mac = (known after apply) 2026-03-24 01:35:10.377371 | orchestrator | + name = (known after apply) 2026-03-24 01:35:10.377383 | orchestrator | + port = (known after apply) 2026-03-24 01:35:10.377394 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.377405 | orchestrator | } 2026-03-24 01:35:10.377417 | orchestrator | } 2026-03-24 01:35:10.377429 | orchestrator | 2026-03-24 01:35:10.377446 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-24 01:35:10.377458 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-24 01:35:10.377470 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-24 01:35:10.377482 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-24 01:35:10.377493 | orchestrator | + all_metadata = (known after apply) 2026-03-24 01:35:10.377505 | orchestrator | + all_tags = (known after apply) 2026-03-24 01:35:10.377517 | orchestrator | + availability_zone = "nova" 2026-03-24 01:35:10.377529 | orchestrator | + config_drive = true 2026-03-24 01:35:10.377540 | orchestrator | + created = (known after apply) 2026-03-24 01:35:10.377552 | orchestrator | + flavor_id = (known after apply) 2026-03-24 01:35:10.377592 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-24 01:35:10.377613 | orchestrator | + force_delete = false 2026-03-24 01:35:10.377632 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-24 01:35:10.377644 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.377656 | orchestrator | + image_id = (known after apply) 2026-03-24 01:35:10.377667 | orchestrator | + image_name = (known after apply) 2026-03-24 01:35:10.377679 | orchestrator | + key_pair = "testbed" 2026-03-24 01:35:10.377691 | orchestrator | + name = "testbed-node-3" 2026-03-24 01:35:10.377702 | orchestrator | + power_state = "active" 2026-03-24 01:35:10.377713 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.377725 | orchestrator | + security_groups = (known after apply) 2026-03-24 01:35:10.377737 | orchestrator | + stop_before_destroy = false 2026-03-24 01:35:10.377748 | orchestrator | + updated = (known after apply) 2026-03-24 01:35:10.377760 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-24 01:35:10.377772 | orchestrator | 2026-03-24 01:35:10.377783 | orchestrator | + block_device { 2026-03-24 01:35:10.377795 | orchestrator | + boot_index = 0 2026-03-24 01:35:10.377806 | orchestrator | + delete_on_termination = false 2026-03-24 
01:35:10.377818 | orchestrator | + destination_type = "volume" 2026-03-24 01:35:10.377870 | orchestrator | + multiattach = false 2026-03-24 01:35:10.377884 | orchestrator | + source_type = "volume" 2026-03-24 01:35:10.377896 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.377908 | orchestrator | } 2026-03-24 01:35:10.377919 | orchestrator | 2026-03-24 01:35:10.377931 | orchestrator | + network { 2026-03-24 01:35:10.377942 | orchestrator | + access_network = false 2026-03-24 01:35:10.377954 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-24 01:35:10.378058 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-24 01:35:10.378078 | orchestrator | + mac = (known after apply) 2026-03-24 01:35:10.378089 | orchestrator | + name = (known after apply) 2026-03-24 01:35:10.378101 | orchestrator | + port = (known after apply) 2026-03-24 01:35:10.378113 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.378124 | orchestrator | } 2026-03-24 01:35:10.378136 | orchestrator | } 2026-03-24 01:35:10.378148 | orchestrator | 2026-03-24 01:35:10.378159 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-24 01:35:10.378171 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-24 01:35:10.378183 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-24 01:35:10.378194 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-24 01:35:10.378205 | orchestrator | + all_metadata = (known after apply) 2026-03-24 01:35:10.378216 | orchestrator | + all_tags = (known after apply) 2026-03-24 01:35:10.378228 | orchestrator | + availability_zone = "nova" 2026-03-24 01:35:10.378239 | orchestrator | + config_drive = true 2026-03-24 01:35:10.378251 | orchestrator | + created = (known after apply) 2026-03-24 01:35:10.378262 | orchestrator | + flavor_id = (known after apply) 2026-03-24 01:35:10.378273 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-24 01:35:10.378284 | 
orchestrator | + force_delete = false 2026-03-24 01:35:10.378295 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-24 01:35:10.378307 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.378318 | orchestrator | + image_id = (known after apply) 2026-03-24 01:35:10.378329 | orchestrator | + image_name = (known after apply) 2026-03-24 01:35:10.378340 | orchestrator | + key_pair = "testbed" 2026-03-24 01:35:10.378352 | orchestrator | + name = "testbed-node-4" 2026-03-24 01:35:10.378364 | orchestrator | + power_state = "active" 2026-03-24 01:35:10.378375 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.378386 | orchestrator | + security_groups = (known after apply) 2026-03-24 01:35:10.378398 | orchestrator | + stop_before_destroy = false 2026-03-24 01:35:10.378409 | orchestrator | + updated = (known after apply) 2026-03-24 01:35:10.378421 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-24 01:35:10.378433 | orchestrator | 2026-03-24 01:35:10.378444 | orchestrator | + block_device { 2026-03-24 01:35:10.378456 | orchestrator | + boot_index = 0 2026-03-24 01:35:10.378467 | orchestrator | + delete_on_termination = false 2026-03-24 01:35:10.378478 | orchestrator | + destination_type = "volume" 2026-03-24 01:35:10.378490 | orchestrator | + multiattach = false 2026-03-24 01:35:10.378501 | orchestrator | + source_type = "volume" 2026-03-24 01:35:10.378512 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.378524 | orchestrator | } 2026-03-24 01:35:10.378535 | orchestrator | 2026-03-24 01:35:10.378547 | orchestrator | + network { 2026-03-24 01:35:10.378640 | orchestrator | + access_network = false 2026-03-24 01:35:10.378657 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-24 01:35:10.378668 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-24 01:35:10.378680 | orchestrator | + mac = (known after apply) 2026-03-24 01:35:10.378691 | orchestrator | + name = (known 
after apply) 2026-03-24 01:35:10.378702 | orchestrator | + port = (known after apply) 2026-03-24 01:35:10.378714 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.378725 | orchestrator | } 2026-03-24 01:35:10.378737 | orchestrator | } 2026-03-24 01:35:10.378759 | orchestrator | 2026-03-24 01:35:10.378771 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-24 01:35:10.378783 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-24 01:35:10.378794 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-24 01:35:10.378806 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-24 01:35:10.378817 | orchestrator | + all_metadata = (known after apply) 2026-03-24 01:35:10.378829 | orchestrator | + all_tags = (known after apply) 2026-03-24 01:35:10.378840 | orchestrator | + availability_zone = "nova" 2026-03-24 01:35:10.378852 | orchestrator | + config_drive = true 2026-03-24 01:35:10.378863 | orchestrator | + created = (known after apply) 2026-03-24 01:35:10.378875 | orchestrator | + flavor_id = (known after apply) 2026-03-24 01:35:10.378886 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-24 01:35:10.378898 | orchestrator | + force_delete = false 2026-03-24 01:35:10.378909 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-24 01:35:10.378920 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.378932 | orchestrator | + image_id = (known after apply) 2026-03-24 01:35:10.378943 | orchestrator | + image_name = (known after apply) 2026-03-24 01:35:10.378955 | orchestrator | + key_pair = "testbed" 2026-03-24 01:35:10.378966 | orchestrator | + name = "testbed-node-5" 2026-03-24 01:35:10.378978 | orchestrator | + power_state = "active" 2026-03-24 01:35:10.378989 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.379000 | orchestrator | + security_groups = (known after apply) 2026-03-24 01:35:10.379012 | orchestrator | + 
stop_before_destroy = false 2026-03-24 01:35:10.379023 | orchestrator | + updated = (known after apply) 2026-03-24 01:35:10.379035 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-24 01:35:10.379046 | orchestrator | 2026-03-24 01:35:10.379058 | orchestrator | + block_device { 2026-03-24 01:35:10.379069 | orchestrator | + boot_index = 0 2026-03-24 01:35:10.379080 | orchestrator | + delete_on_termination = false 2026-03-24 01:35:10.379092 | orchestrator | + destination_type = "volume" 2026-03-24 01:35:10.379103 | orchestrator | + multiattach = false 2026-03-24 01:35:10.379114 | orchestrator | + source_type = "volume" 2026-03-24 01:35:10.379126 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.379137 | orchestrator | } 2026-03-24 01:35:10.379148 | orchestrator | 2026-03-24 01:35:10.379160 | orchestrator | + network { 2026-03-24 01:35:10.379171 | orchestrator | + access_network = false 2026-03-24 01:35:10.379183 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-24 01:35:10.379194 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-24 01:35:10.379205 | orchestrator | + mac = (known after apply) 2026-03-24 01:35:10.379217 | orchestrator | + name = (known after apply) 2026-03-24 01:35:10.379241 | orchestrator | + port = (known after apply) 2026-03-24 01:35:10.379253 | orchestrator | + uuid = (known after apply) 2026-03-24 01:35:10.379264 | orchestrator | } 2026-03-24 01:35:10.379276 | orchestrator | } 2026-03-24 01:35:10.379287 | orchestrator | 2026-03-24 01:35:10.379299 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-24 01:35:10.379311 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-24 01:35:10.379322 | orchestrator | + fingerprint = (known after apply) 2026-03-24 01:35:10.379334 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.379345 | orchestrator | + name = "testbed" 2026-03-24 01:35:10.379356 | orchestrator | + private_key = 
(sensitive value) 2026-03-24 01:35:10.379367 | orchestrator | + public_key = (known after apply) 2026-03-24 01:35:10.379379 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.379390 | orchestrator | + user_id = (known after apply) 2026-03-24 01:35:10.379401 | orchestrator | } 2026-03-24 01:35:10.379413 | orchestrator | 2026-03-24 01:35:10.379424 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-24 01:35:10.379436 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-24 01:35:10.379455 | orchestrator | + device = (known after apply) 2026-03-24 01:35:10.379466 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.379478 | orchestrator | + instance_id = (known after apply) 2026-03-24 01:35:10.379489 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.379508 | orchestrator | + volume_id = (known after apply) 2026-03-24 01:35:10.379520 | orchestrator | } 2026-03-24 01:35:10.379531 | orchestrator | 2026-03-24 01:35:10.379543 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-24 01:35:10.379575 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-24 01:35:10.379589 | orchestrator | + device = (known after apply) 2026-03-24 01:35:10.379601 | orchestrator | + id = (known after apply) 2026-03-24 01:35:10.379612 | orchestrator | + instance_id = (known after apply) 2026-03-24 01:35:10.379623 | orchestrator | + region = (known after apply) 2026-03-24 01:35:10.379635 | orchestrator | + volume_id = (known after apply) 2026-03-24 01:35:10.379646 | orchestrator | } 2026-03-24 01:35:10.379658 | orchestrator | 2026-03-24 01:35:10.379670 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-24 01:35:10.379682 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
    {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-24 01:35:10.386862 | orchestrator | + network_id = (known after apply)
2026-03-24 01:35:10.386873 | orchestrator | + no_gateway = false
2026-03-24 01:35:10.386883 | orchestrator | + region = (known after apply)
2026-03-24 01:35:10.386894 | orchestrator | + service_types = (known after apply)
2026-03-24 01:35:10.386910 | orchestrator | + tenant_id = (known after apply)
2026-03-24 01:35:10.386920 | orchestrator |
2026-03-24 01:35:10.386931 | orchestrator | + allocation_pool {
2026-03-24 01:35:10.386941 | orchestrator | + end = "192.168.31.250"
2026-03-24 01:35:10.386951 | orchestrator | + start = "192.168.31.200"
2026-03-24 01:35:10.386961 | orchestrator | }
2026-03-24 01:35:10.386971 | orchestrator | }
2026-03-24 01:35:10.386982 | orchestrator |
2026-03-24 01:35:10.386992 | orchestrator | # terraform_data.image will be created
2026-03-24 01:35:10.387002 | orchestrator | + resource "terraform_data" "image" {
2026-03-24 01:35:10.387012 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.387023 | orchestrator | + input = "Ubuntu 24.04"
2026-03-24 01:35:10.387033 | orchestrator | + output = (known after apply)
2026-03-24 01:35:10.387043 | orchestrator | }
2026-03-24 01:35:10.387053 | orchestrator |
2026-03-24 01:35:10.387063 | orchestrator | # terraform_data.image_node will be created
2026-03-24 01:35:10.387073 | orchestrator | + resource "terraform_data" "image_node" {
2026-03-24 01:35:10.387084 | orchestrator | + id = (known after apply)
2026-03-24 01:35:10.387094 | orchestrator | + input = "Ubuntu 24.04"
2026-03-24 01:35:10.387104 | orchestrator | + output = (known after apply)
2026-03-24 01:35:10.387114 | orchestrator | }
2026-03-24 01:35:10.387124 | orchestrator |
2026-03-24 01:35:10.387134 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
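For reference, the management subnet and the VRRP rule in the plan above correspond to Terraform resource blocks along these lines. This is a sketch reconstructed from the plan output, not the testbed's actual source: the attribute values are taken verbatim from the plan, while the `network_id` and `security_group_id` references are assumptions about how the configuration wires resources together.

```hcl
# Sketch reconstructed from the plan output above (resource references assumed).
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

# Protocol "112" is VRRP, the IP protocol keepalived uses for virtual IP failover.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id # assumed reference
}
```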
2026-03-24 01:35:10.387144 | orchestrator |
2026-03-24 01:35:10.387155 | orchestrator | Changes to Outputs:
2026-03-24 01:35:10.387165 | orchestrator | + manager_address = (sensitive value)
2026-03-24 01:35:10.387175 | orchestrator | + private_key = (sensitive value)
2026-03-24 01:35:10.620019 | orchestrator | terraform_data.image_node: Creating...
2026-03-24 01:35:10.620411 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=77085f08-825f-526a-2507-d15ecb9dca89]
2026-03-24 01:35:10.620847 | orchestrator | terraform_data.image: Creating...
2026-03-24 01:35:10.621795 | orchestrator | terraform_data.image: Creation complete after 0s [id=7d0767c4-fd69-98fd-d721-be50fc5eb697]
2026-03-24 01:35:10.648376 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-24 01:35:10.648509 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-24 01:35:10.657349 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-24 01:35:10.657430 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-24 01:35:10.669217 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-24 01:35:10.669940 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-24 01:35:10.670658 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-24 01:35:10.672585 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-24 01:35:10.672616 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-24 01:35:10.680367 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-24 01:35:11.206403 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-24 01:35:11.210925 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-24 01:35:11.217162 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-24 01:35:11.218439 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-24 01:35:11.275308 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-03-24 01:35:11.290415 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-24 01:35:11.743760 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=309c8415-708b-4a10-8257-4e65713a75a3]
2026-03-24 01:35:11.759112 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-24 01:35:14.363493 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=f47182f1-e0cb-4bfc-90df-52f037a6948f]
2026-03-24 01:35:14.371259 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-24 01:35:14.389383 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=1a2e3e3a-174f-4e75-8feb-939a2c61d94b]
2026-03-24 01:35:14.397435 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-24 01:35:14.404111 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=b0876e92-837d-465a-b4f4-3ffe4ea78710]
2026-03-24 01:35:14.408239 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=513f3ae0-646a-4c6d-9e1f-306e5b70376d]
2026-03-24 01:35:14.409836 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-24 01:35:14.411645 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-24 01:35:14.427413 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=b1c01c59-5cc3-4efd-b762-ef9b36f8e82a]
2026-03-24 01:35:14.430458 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=69b3fd8b-3b41-44d2-abc9-ba13d6107c6e]
2026-03-24 01:35:14.433518 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-24 01:35:14.438118 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-24 01:35:14.505020 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=637e3c3b-1b7c-4875-ba1f-929ede49b5d5]
2026-03-24 01:35:14.518247 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-24 01:35:14.524849 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=5253d6e1199497532e6f6612eb5fc271f93485af]
2026-03-24 01:35:14.524949 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=2604bb68-60c6-4ec4-9aac-15d0d9f1349c]
2026-03-24 01:35:14.524975 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=ed299c06-0435-4936-a363-f05696f72d5b]
2026-03-24 01:35:14.536125 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-24 01:35:14.536202 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-24 01:35:14.538288 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=a716d7ed56e60e14edbca5275958aa1111e86d35]
2026-03-24 01:35:15.110799 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=6bbbff7c-b34f-46ab-9339-96e122f5aec5]
2026-03-24 01:35:15.360030 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 0s [id=493b533a-d72b-4790-a93f-34e4fad5caa9]
2026-03-24 01:35:15.369648 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-24 01:35:17.765070 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=85facbe5-74b3-4310-a7db-d9f42aedacb8]
2026-03-24 01:35:17.823055 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=10408dfc-d3b8-4f62-9e98-aca56513cc7c]
2026-03-24 01:35:17.824950 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=8862b49e-6192-4e89-91ad-23c351a2afe9]
2026-03-24 01:35:17.832181 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=2db98c7e-0495-471f-a090-f7de28c85f93]
2026-03-24 01:35:18.088496 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=38806cf6-6731-4d59-a6fd-09ab679ddc88]
2026-03-24 01:35:18.098196 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-24 01:35:18.098445 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-03-24 01:35:18.099941 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-24 01:35:18.321372 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=8d2eb39b-fdfb-4ea9-896d-1008c20b84d8]
2026-03-24 01:35:18.321474 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a2c5c561-3059-4330-8676-5529744cfb25]
2026-03-24 01:35:18.334857 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-24 01:35:18.336214 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-24 01:35:18.337042 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-24 01:35:18.337270 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-24 01:35:18.338113 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-24 01:35:18.338685 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-24 01:35:18.340876 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-24 01:35:18.488512 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=063919ee-14ee-405a-807c-08e7f14724ba]
2026-03-24 01:35:18.502208 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-24 01:35:18.565676 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=81f4d84b-32eb-4f37-b541-75f58d99deda]
2026-03-24 01:35:18.579073 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-24 01:35:18.976169 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=6f7bc0ca-8e22-476b-94c0-3650aa15c3a7]
2026-03-24 01:35:18.990851 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-24 01:35:19.139023 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=6a6bc548-4977-4d72-9ae2-b7e2e3ba2443]
2026-03-24 01:35:19.159624 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-24 01:35:19.173104 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=164530a1-bd95-4d33-b14e-290d2e2f08c9]
2026-03-24 01:35:19.181099 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-24 01:35:19.222363 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=1cd17d61-8ef8-48d5-a4bf-005f38d811e9]
2026-03-24 01:35:19.230171 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 5s [id=f4fc154b-cdf9-4366-8d70-cd811913fdc6]
2026-03-24 01:35:19.232036 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-24 01:35:19.235660 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-24 01:35:19.290626 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=f777c396-f58d-44a7-9bbe-9e29d8850485]
2026-03-24 01:35:19.297999 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-24 01:35:19.371320 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=8afba6ad-5f85-41bd-806a-3a88434e3eae]
2026-03-24 01:35:19.377735 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-24 01:35:19.476090 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=eb85cf19-bf6d-421c-a5dd-8c5d530bafa1]
2026-03-24 01:35:19.477220 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=e047efc9-29ab-4fff-8f18-f7621d7b4cb3]
2026-03-24 01:35:19.642708 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=f63b23dd-1cdc-4034-8517-945728c73e01]
2026-03-24 01:35:19.672843 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=7d86b86d-3c4f-4798-848a-4c369da33b8c]
2026-03-24 01:35:19.746432 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=03f7056c-fc54-4f92-85cc-3def75f97f53]
2026-03-24 01:35:19.754980 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=e7c95459-4ff3-450d-87a7-a3aa1efb243e]
2026-03-24 01:35:19.763405 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=676783e2-3dcb-40a9-bdff-38b18d4eacb9]
2026-03-24 01:35:19.789750 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=d9851be3-ea3a-4fae-b320-5f4ff02e34b5]
2026-03-24 01:35:19.942939 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=530b1ea4-7027-44de-8ce7-06f5d09c7bc9]
2026-03-24 01:35:21.693494 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=1db12d80-84ec-417a-ad0b-d07685ca9765]
2026-03-24 01:35:21.714237 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-24 01:35:21.728914 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-24 01:35:21.738041 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-24 01:35:21.743774 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-24 01:35:21.747193 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-24 01:35:21.748870 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-24 01:35:21.766415 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-24 01:35:23.070524 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=88a89a78-a7d3-404c-9744-90e168c99d02]
2026-03-24 01:35:23.081022 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-24 01:35:23.084803 | orchestrator | local_file.inventory: Creating...
2026-03-24 01:35:23.097151 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-24 01:35:23.244519 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=87103b6df0187f819867acb0a7d1544eb46c9007]
2026-03-24 01:35:23.246265 | orchestrator | local_file.inventory: Creation complete after 0s [id=55a2d62379c62033a7ea17d571c0327b7b8069d0]
2026-03-24 01:35:23.801211 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=88a89a78-a7d3-404c-9744-90e168c99d02]
2026-03-24 01:35:31.729710 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-24 01:35:31.750920 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-24 01:35:31.753102 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-24 01:35:31.755382 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-24 01:35:31.755423 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-24 01:35:31.767458 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-24 01:35:41.731114 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-24 01:35:41.751244 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-24 01:35:41.753485 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-24 01:35:41.755972 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-24 01:35:41.756109 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-24 01:35:41.768601 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-24 01:35:42.160390 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=a65cd7ce-2319-472a-a3c7-ad9ff7657e43]
2026-03-24 01:35:42.239230 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=a8acf43b-6fa4-4148-9f43-2745fe346f69]
2026-03-24 01:35:42.479787 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=59b9b47a-b075-4ec4-a037-20ed9af0da1b]
2026-03-24 01:35:51.759333 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-24 01:35:51.759416 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-24 01:35:51.759424 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-03-24 01:35:52.453348 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=5c689d57-d9f7-4ffe-87f6-dd8a9492a5c0]
2026-03-24 01:35:52.503392 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=172a9258-62df-4d2c-9dd0-b7f57eb38c40]
2026-03-24 01:35:52.560157 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=bf25b23d-eab9-4368-b867-1f9339555f51]
2026-03-24 01:35:52.589980 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-24 01:35:52.591877 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-24 01:35:52.591927 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-24 01:35:52.596034 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7970628391773171237]
2026-03-24 01:35:52.596577 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-24 01:35:52.597431 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-24 01:35:52.597476 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-24 01:35:52.597883 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-24 01:35:52.604354 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-24 01:35:52.617844 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-03-24 01:35:52.631853 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-24 01:35:52.634849 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
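The nine volume attachments starting above pair node_volume[0..8] with node_server[3..5], three volumes per node (visible in the instance/volume ID pairs once the attachments complete). A sketch of a resource block that would produce this pattern, with the index arithmetic being an assumption reconstructed from the log rather than the testbed's actual source:

```hcl
# Sketch of the node volume attachments (mapping inferred from the log output).
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9

  # Volumes 0,3,6 go to node_server[3]; 1,4,7 to [4]; 2,5,8 to [5] (assumed indexing).
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```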
2026-03-24 01:35:55.978568 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=172a9258-62df-4d2c-9dd0-b7f57eb38c40/f47182f1-e0cb-4bfc-90df-52f037a6948f]
2026-03-24 01:35:55.996281 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=bf25b23d-eab9-4368-b867-1f9339555f51/b1c01c59-5cc3-4efd-b762-ef9b36f8e82a]
2026-03-24 01:35:56.006910 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=59b9b47a-b075-4ec4-a037-20ed9af0da1b/1a2e3e3a-174f-4e75-8feb-939a2c61d94b]
2026-03-24 01:35:56.025803 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=172a9258-62df-4d2c-9dd0-b7f57eb38c40/ed299c06-0435-4936-a363-f05696f72d5b]
2026-03-24 01:35:56.054327 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=bf25b23d-eab9-4368-b867-1f9339555f51/69b3fd8b-3b41-44d2-abc9-ba13d6107c6e]
2026-03-24 01:35:56.061393 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=59b9b47a-b075-4ec4-a037-20ed9af0da1b/2604bb68-60c6-4ec4-9aac-15d0d9f1349c]
2026-03-24 01:36:02.156094 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=172a9258-62df-4d2c-9dd0-b7f57eb38c40/513f3ae0-646a-4c6d-9e1f-306e5b70376d]
2026-03-24 01:36:02.166656 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=bf25b23d-eab9-4368-b867-1f9339555f51/637e3c3b-1b7c-4875-ba1f-929ede49b5d5]
2026-03-24 01:36:02.193798 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=59b9b47a-b075-4ec4-a037-20ed9af0da1b/b0876e92-837d-465a-b4f4-3ffe4ea78710]
2026-03-24 01:36:02.636083 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-24 01:36:12.636727 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-24 01:36:12.941406 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=ca3b23aa-fa19-488e-b4f5-5976516563ba]
2026-03-24 01:36:12.953896 | orchestrator |
2026-03-24 01:36:12.954080 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-24 01:36:12.954112 | orchestrator |
2026-03-24 01:36:12.954134 | orchestrator | Outputs:
2026-03-24 01:36:12.954154 | orchestrator |
2026-03-24 01:36:12.954174 | orchestrator | manager_address =
2026-03-24 01:36:12.954195 | orchestrator | private_key =
2026-03-24 01:36:13.036413 | orchestrator | ok: Runtime: 0:01:08.064652
2026-03-24 01:36:13.068743 |
2026-03-24 01:36:13.068928 | TASK [Fetch manager address]
2026-03-24 01:36:13.515251 | orchestrator | ok
2026-03-24 01:36:13.522750 |
2026-03-24 01:36:13.522902 | TASK [Set manager_host address]
2026-03-24 01:36:13.597280 | orchestrator | ok
2026-03-24 01:36:13.607315 |
2026-03-24 01:36:13.607469 | LOOP [Update ansible collections]
2026-03-24 01:36:14.525706 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-24 01:36:14.526088 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-24 01:36:14.526154 | orchestrator | Starting galaxy collection install process
2026-03-24 01:36:14.526196 | orchestrator | Process install dependency map
2026-03-24 01:36:14.526232 | orchestrator | Starting collection install process
2026-03-24 01:36:14.526266 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-03-24 01:36:14.526303 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-03-24 01:36:14.526345 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-24 01:36:14.526447 | orchestrator | ok: Item: commons Runtime: 0:00:00.581437
2026-03-24 01:36:15.428062 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-24 01:36:15.428271 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-24 01:36:15.429008 | orchestrator | Starting galaxy collection install process
2026-03-24 01:36:15.429064 | orchestrator | Process install dependency map
2026-03-24 01:36:15.429106 | orchestrator | Starting collection install process
2026-03-24 01:36:15.429144 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-03-24 01:36:15.429183 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-03-24 01:36:15.429220 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-24 01:36:15.429279 | orchestrator | ok: Item: services Runtime: 0:00:00.613501
2026-03-24 01:36:15.451599 |
2026-03-24 01:36:15.451762 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-24 01:36:26.036699 | orchestrator | ok
2026-03-24 01:36:26.047212 |
2026-03-24 01:36:26.047358 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-24 01:37:26.094718 | orchestrator | ok
2026-03-24 01:37:26.107346 |
2026-03-24 01:37:26.107569 | TASK [Fetch manager ssh hostkey]
2026-03-24 01:37:27.707523 | orchestrator | Output suppressed because no_log was given
2026-03-24 01:37:27.723545 |
2026-03-24 01:37:27.723735 | TASK [Get ssh keypair from terraform environment]
2026-03-24 01:37:28.261950 | orchestrator | ok: Runtime: 0:00:00.008116
2026-03-24 01:37:28.278953 |
2026-03-24 01:37:28.279140 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-24 01:37:28.328165 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-24 01:37:28.338203 |
2026-03-24 01:37:28.338332 | TASK [Run manager part 0]
2026-03-24 01:37:29.283360 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-24 01:37:29.334822 | orchestrator |
2026-03-24 01:37:29.334889 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-24 01:37:29.334899 | orchestrator |
2026-03-24 01:37:29.334918 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-24 01:37:30.924277 | orchestrator | ok: [testbed-manager]
2026-03-24 01:37:30.924387 | orchestrator |
2026-03-24 01:37:30.924420 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-24 01:37:30.924434 | orchestrator |
2026-03-24 01:37:30.924447 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-24 01:37:32.808578 | orchestrator | ok: [testbed-manager]
2026-03-24 01:37:32.808666 | orchestrator |
2026-03-24 01:37:32.808691 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-24 01:37:33.477567 | orchestrator | ok: [testbed-manager]
2026-03-24 01:37:33.477654 | orchestrator |
2026-03-24 01:37:33.477672 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-24 01:37:33.524864 | orchestrator | skipping: [testbed-manager]
2026-03-24 01:37:33.524926 | orchestrator |
2026-03-24 01:37:33.524936 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-24 01:37:33.571383 | orchestrator | skipping: [testbed-manager]
2026-03-24 01:37:33.571469 | orchestrator |
2026-03-24 01:37:33.571479 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-24 01:37:33.611703 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:37:33.611802 | orchestrator | 2026-03-24 01:37:33.611819 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-24 01:37:33.653682 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:37:33.653771 | orchestrator | 2026-03-24 01:37:33.653785 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-24 01:37:33.705026 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:37:33.705082 | orchestrator | 2026-03-24 01:37:33.705091 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-24 01:37:33.748574 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:37:33.748634 | orchestrator | 2026-03-24 01:37:33.748645 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-24 01:37:33.786737 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:37:33.786792 | orchestrator | 2026-03-24 01:37:33.786799 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-24 01:37:34.478612 | orchestrator | changed: [testbed-manager] 2026-03-24 01:37:34.478698 | orchestrator | 2026-03-24 01:37:34.478710 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-24 01:40:11.764881 | orchestrator | changed: [testbed-manager] 2026-03-24 01:40:11.764944 | orchestrator | 2026-03-24 01:40:11.764959 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-24 01:41:25.626477 | orchestrator | changed: [testbed-manager] 2026-03-24 01:41:25.626610 | orchestrator | 2026-03-24 01:41:25.626638 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-03-24 01:41:46.614260 | orchestrator | changed: [testbed-manager] 2026-03-24 01:41:46.614355 | orchestrator | 2026-03-24 01:41:46.614370 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-24 01:41:54.860273 | orchestrator | changed: [testbed-manager] 2026-03-24 01:41:54.860380 | orchestrator | 2026-03-24 01:41:54.860397 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-24 01:41:54.908673 | orchestrator | ok: [testbed-manager] 2026-03-24 01:41:54.908795 | orchestrator | 2026-03-24 01:41:54.908823 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-24 01:41:55.772121 | orchestrator | ok: [testbed-manager] 2026-03-24 01:41:55.772204 | orchestrator | 2026-03-24 01:41:55.772218 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-24 01:41:56.480529 | orchestrator | changed: [testbed-manager] 2026-03-24 01:41:56.480735 | orchestrator | 2026-03-24 01:41:56.480763 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-24 01:42:02.614060 | orchestrator | changed: [testbed-manager] 2026-03-24 01:42:02.614154 | orchestrator | 2026-03-24 01:42:02.614188 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-24 01:42:08.391368 | orchestrator | changed: [testbed-manager] 2026-03-24 01:42:08.392279 | orchestrator | 2026-03-24 01:42:08.392311 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-24 01:42:10.976142 | orchestrator | changed: [testbed-manager] 2026-03-24 01:42:10.976203 | orchestrator | 2026-03-24 01:42:10.976212 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-24 01:42:12.630450 | 
orchestrator | changed: [testbed-manager] 2026-03-24 01:42:12.630543 | orchestrator | 2026-03-24 01:42:12.630568 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-24 01:42:13.669745 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-24 01:42:13.669847 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-24 01:42:13.669863 | orchestrator | 2026-03-24 01:42:13.669877 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-24 01:42:13.714253 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-24 01:42:13.714355 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-24 01:42:13.714376 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-24 01:42:13.714396 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-24 01:42:16.973488 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-24 01:42:16.973580 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-24 01:42:16.973595 | orchestrator | 2026-03-24 01:42:16.973608 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-24 01:42:17.510281 | orchestrator | changed: [testbed-manager] 2026-03-24 01:42:17.510377 | orchestrator | 2026-03-24 01:42:17.510393 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-24 01:43:36.811023 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-24 01:43:36.811139 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-24 01:43:36.811159 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-24 01:43:36.811172 | orchestrator | 2026-03-24 01:43:36.811185 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-24 01:43:39.048403 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-24 01:43:39.048551 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-24 01:43:39.048579 | orchestrator | 2026-03-24 01:43:39.048599 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-24 01:43:39.048618 | orchestrator | 2026-03-24 01:43:39.048637 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-24 01:43:40.396312 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:40.396377 | orchestrator | 2026-03-24 01:43:40.396385 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-24 01:43:40.437868 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:40.437954 | 
orchestrator | 2026-03-24 01:43:40.437970 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-24 01:43:40.510326 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:40.510425 | orchestrator | 2026-03-24 01:43:40.510442 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-24 01:43:41.248331 | orchestrator | changed: [testbed-manager] 2026-03-24 01:43:41.248430 | orchestrator | 2026-03-24 01:43:41.248447 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-24 01:43:41.981425 | orchestrator | changed: [testbed-manager] 2026-03-24 01:43:41.982195 | orchestrator | 2026-03-24 01:43:41.982260 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-24 01:43:43.337521 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-24 01:43:43.337598 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-24 01:43:43.337606 | orchestrator | 2026-03-24 01:43:43.337623 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-24 01:43:44.663642 | orchestrator | changed: [testbed-manager] 2026-03-24 01:43:44.663775 | orchestrator | 2026-03-24 01:43:44.663796 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-24 01:43:46.411232 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-24 01:43:46.411273 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-24 01:43:46.411279 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-24 01:43:46.411285 | orchestrator | 2026-03-24 01:43:46.411292 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-24 01:43:46.480357 | orchestrator | skipping: 
[testbed-manager] 2026-03-24 01:43:46.480407 | orchestrator | 2026-03-24 01:43:46.480416 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-24 01:43:46.559183 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:43:46.559224 | orchestrator | 2026-03-24 01:43:46.559233 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-24 01:43:47.132135 | orchestrator | changed: [testbed-manager] 2026-03-24 01:43:47.132223 | orchestrator | 2026-03-24 01:43:47.132237 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-24 01:43:47.201700 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:43:47.201759 | orchestrator | 2026-03-24 01:43:47.201765 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-24 01:43:48.052960 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-24 01:43:48.053065 | orchestrator | changed: [testbed-manager] 2026-03-24 01:43:48.053085 | orchestrator | 2026-03-24 01:43:48.053100 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-24 01:43:48.087021 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:43:48.087160 | orchestrator | 2026-03-24 01:43:48.087178 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-24 01:43:48.130678 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:43:48.130766 | orchestrator | 2026-03-24 01:43:48.130779 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-24 01:43:48.170087 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:43:48.170176 | orchestrator | 2026-03-24 01:43:48.170193 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-24 01:43:48.239493 | 
orchestrator | skipping: [testbed-manager] 2026-03-24 01:43:48.239615 | orchestrator | 2026-03-24 01:43:48.239631 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-24 01:43:48.953616 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:48.953723 | orchestrator | 2026-03-24 01:43:48.953748 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-24 01:43:48.953769 | orchestrator | 2026-03-24 01:43:48.953787 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-24 01:43:50.326388 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:50.326550 | orchestrator | 2026-03-24 01:43:50.326579 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-24 01:43:51.268302 | orchestrator | changed: [testbed-manager] 2026-03-24 01:43:51.268434 | orchestrator | 2026-03-24 01:43:51.268463 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 01:43:51.268485 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-24 01:43:51.268576 | orchestrator | 2026-03-24 01:43:51.619964 | orchestrator | ok: Runtime: 0:06:22.743720 2026-03-24 01:43:51.635333 | 2026-03-24 01:43:51.635498 | TASK [Point out that login on the manager is now possible] 2026-03-24 01:43:51.689767 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-24 01:43:51.697023 | 2026-03-24 01:43:51.697160 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-24 01:43:51.728525 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-03-24 01:43:51.735766 | 2026-03-24 01:43:51.735889 | TASK [Run manager part 1 + 2] 2026-03-24 01:43:52.632550 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-24 01:43:52.700178 | orchestrator | 2026-03-24 01:43:52.700232 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-24 01:43:52.700239 | orchestrator | 2026-03-24 01:43:52.700253 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-24 01:43:55.634647 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:55.634697 | orchestrator | 2026-03-24 01:43:55.634719 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-24 01:43:55.672597 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:43:55.672654 | orchestrator | 2026-03-24 01:43:55.672664 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-24 01:43:55.715100 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:55.715154 | orchestrator | 2026-03-24 01:43:55.715166 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-24 01:43:55.756234 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:55.756288 | orchestrator | 2026-03-24 01:43:55.756297 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-24 01:43:55.840338 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:55.840398 | orchestrator | 2026-03-24 01:43:55.840408 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-24 01:43:55.913391 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:55.913453 | orchestrator | 2026-03-24 01:43:55.913463 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-24 01:43:55.969314 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-24 01:43:55.969358 | orchestrator | 2026-03-24 01:43:55.969364 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-24 01:43:56.683772 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:56.683826 | orchestrator | 2026-03-24 01:43:56.683835 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-24 01:43:56.737604 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:43:56.737655 | orchestrator | 2026-03-24 01:43:56.737662 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-24 01:43:58.090968 | orchestrator | changed: [testbed-manager] 2026-03-24 01:43:58.091023 | orchestrator | 2026-03-24 01:43:58.091031 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-24 01:43:58.646980 | orchestrator | ok: [testbed-manager] 2026-03-24 01:43:58.647037 | orchestrator | 2026-03-24 01:43:58.647045 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-24 01:43:59.736812 | orchestrator | changed: [testbed-manager] 2026-03-24 01:43:59.736865 | orchestrator | 2026-03-24 01:43:59.736875 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-24 01:44:14.429943 | orchestrator | changed: [testbed-manager] 2026-03-24 01:44:14.430095 | orchestrator | 2026-03-24 01:44:14.430117 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-24 01:44:15.130792 | orchestrator | ok: [testbed-manager] 2026-03-24 01:44:15.130909 | orchestrator | 2026-03-24 01:44:15.130936 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-24 01:44:15.216391 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:44:15.216503 | orchestrator | 2026-03-24 01:44:15.216561 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-24 01:44:16.155865 | orchestrator | changed: [testbed-manager] 2026-03-24 01:44:16.155907 | orchestrator | 2026-03-24 01:44:16.155916 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-24 01:44:17.096601 | orchestrator | changed: [testbed-manager] 2026-03-24 01:44:17.096698 | orchestrator | 2026-03-24 01:44:17.096717 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-24 01:44:17.642428 | orchestrator | changed: [testbed-manager] 2026-03-24 01:44:17.642468 | orchestrator | 2026-03-24 01:44:17.642476 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-24 01:44:17.684438 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-24 01:44:17.684614 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-24 01:44:17.684645 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-24 01:44:17.684667 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-24 01:44:19.708807 | orchestrator | changed: [testbed-manager] 2026-03-24 01:44:19.708856 | orchestrator | 2026-03-24 01:44:19.708865 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-24 01:44:28.420809 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-24 01:44:28.421060 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-24 01:44:28.421081 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-24 01:44:28.421093 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-24 01:44:28.421114 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-24 01:44:28.421124 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-24 01:44:28.421135 | orchestrator | 2026-03-24 01:44:28.421146 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-24 01:44:29.444791 | orchestrator | changed: [testbed-manager] 2026-03-24 01:44:29.444894 | orchestrator | 2026-03-24 01:44:29.444925 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-24 01:44:29.481319 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:44:29.481432 | orchestrator | 2026-03-24 01:44:29.481468 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-24 01:44:32.474896 | orchestrator | changed: [testbed-manager] 2026-03-24 01:44:32.475672 | orchestrator | 2026-03-24 01:44:32.475694 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-24 01:44:32.520830 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:44:32.520923 | orchestrator | 2026-03-24 01:44:32.520940 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-24 01:46:05.337058 | orchestrator | changed: [testbed-manager] 2026-03-24 
01:46:05.337121 | orchestrator | 2026-03-24 01:46:05.337129 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-24 01:46:06.390554 | orchestrator | ok: [testbed-manager] 2026-03-24 01:46:06.390628 | orchestrator | 2026-03-24 01:46:06.390652 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 01:46:06.390705 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-24 01:46:06.390715 | orchestrator | 2026-03-24 01:46:06.875538 | orchestrator | ok: Runtime: 0:02:14.430960 2026-03-24 01:46:06.892955 | 2026-03-24 01:46:06.893098 | TASK [Reboot manager] 2026-03-24 01:46:08.429387 | orchestrator | ok: Runtime: 0:00:00.921910 2026-03-24 01:46:08.447512 | 2026-03-24 01:46:08.447724 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-24 01:46:22.258823 | orchestrator | ok 2026-03-24 01:46:22.270894 | 2026-03-24 01:46:22.271047 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-24 01:47:22.321810 | orchestrator | ok 2026-03-24 01:47:22.332057 | 2026-03-24 01:47:22.332192 | TASK [Deploy manager + bootstrap nodes] 2026-03-24 01:47:24.726231 | orchestrator | 2026-03-24 01:47:24.726470 | orchestrator | # DEPLOY MANAGER 2026-03-24 01:47:24.726493 | orchestrator | 2026-03-24 01:47:24.726508 | orchestrator | + set -e 2026-03-24 01:47:24.726523 | orchestrator | + echo 2026-03-24 01:47:24.726538 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-24 01:47:24.726555 | orchestrator | + echo 2026-03-24 01:47:24.726602 | orchestrator | + cat /opt/manager-vars.sh 2026-03-24 01:47:24.729503 | orchestrator | export NUMBER_OF_NODES=6 2026-03-24 01:47:24.729582 | orchestrator | 2026-03-24 01:47:24.729600 | orchestrator | export CEPH_VERSION=reef 2026-03-24 01:47:24.729616 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-24 01:47:24.729629 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-24 01:47:24.729657 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-24 01:47:24.729669 | orchestrator | 2026-03-24 01:47:24.729688 | orchestrator | export ARA=false 2026-03-24 01:47:24.729700 | orchestrator | export DEPLOY_MODE=manager 2026-03-24 01:47:24.729718 | orchestrator | export TEMPEST=false 2026-03-24 01:47:24.729731 | orchestrator | export IS_ZUUL=true 2026-03-24 01:47:24.729742 | orchestrator | 2026-03-24 01:47:24.729799 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 01:47:24.729812 | orchestrator | export EXTERNAL_API=false 2026-03-24 01:47:24.729823 | orchestrator | 2026-03-24 01:47:24.729835 | orchestrator | export IMAGE_USER=ubuntu 2026-03-24 01:47:24.729850 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-24 01:47:24.729861 | orchestrator | 2026-03-24 01:47:24.729873 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-24 01:47:24.729896 | orchestrator | 2026-03-24 01:47:24.729908 | orchestrator | + echo 2026-03-24 01:47:24.729925 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 01:47:24.730504 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 01:47:24.730526 | orchestrator | ++ INTERACTIVE=false 2026-03-24 01:47:24.730538 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 01:47:24.730551 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 01:47:24.730567 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 01:47:24.730578 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 01:47:24.730590 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 01:47:24.730605 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-24 01:47:24.730617 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 01:47:24.730628 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 01:47:24.730640 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 01:47:24.730651 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 01:47:24.730662 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 01:47:24.730673 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 01:47:24.730696 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 01:47:24.730924 | orchestrator | ++ export ARA=false 2026-03-24 01:47:24.731022 | orchestrator | ++ ARA=false 2026-03-24 01:47:24.731083 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 01:47:24.731099 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 01:47:24.731110 | orchestrator | ++ export TEMPEST=false 2026-03-24 01:47:24.731124 | orchestrator | ++ TEMPEST=false 2026-03-24 01:47:24.731135 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 01:47:24.731146 | orchestrator | ++ IS_ZUUL=true 2026-03-24 01:47:24.731159 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 01:47:24.731171 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 01:47:24.731184 | orchestrator | ++ export EXTERNAL_API=false 2026-03-24 01:47:24.731195 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 01:47:24.731206 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 01:47:24.731218 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 01:47:24.731230 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 01:47:24.731241 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 01:47:24.731254 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 01:47:24.731265 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 01:47:24.731277 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-24 01:47:24.781486 | orchestrator | + docker version 2026-03-24 01:47:24.886520 | orchestrator | Client: Docker Engine - Community 2026-03-24 01:47:24.886619 | orchestrator | Version: 27.5.1 2026-03-24 01:47:24.886635 | orchestrator | API version: 1.47 2026-03-24 01:47:24.886646 | orchestrator | Go version: go1.22.11 2026-03-24 01:47:24.886656 | orchestrator | Git commit: 9f9e405 2026-03-24 01:47:24.886667 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-24 01:47:24.886678 | orchestrator | OS/Arch: linux/amd64 2026-03-24 01:47:24.886688 | orchestrator | Context: default 2026-03-24 01:47:24.886699 | orchestrator | 2026-03-24 01:47:24.886709 | orchestrator | Server: Docker Engine - Community 2026-03-24 01:47:24.886720 | orchestrator | Engine: 2026-03-24 01:47:24.886730 | orchestrator | Version: 27.5.1 2026-03-24 01:47:24.886741 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-24 01:47:24.886835 | orchestrator | Go version: go1.22.11 2026-03-24 01:47:24.886847 | orchestrator | Git commit: 4c9b3b0 2026-03-24 01:47:24.886857 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-24 01:47:24.886874 | orchestrator | OS/Arch: linux/amd64 2026-03-24 01:47:24.886891 | orchestrator | Experimental: false 2026-03-24 01:47:24.886907 | orchestrator | containerd: 2026-03-24 01:47:24.886924 | orchestrator | Version: v2.2.2 2026-03-24 01:47:24.886941 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-24 01:47:24.886957 | orchestrator | runc: 2026-03-24 01:47:24.886973 | orchestrator | Version: 1.3.4 2026-03-24 01:47:24.886988 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-24 01:47:24.887005 | orchestrator | docker-init: 2026-03-24 01:47:24.887020 | orchestrator | Version: 0.19.0 2026-03-24 01:47:24.887037 | orchestrator | GitCommit: de40ad0 2026-03-24 01:47:24.888786 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-24 01:47:24.896286 | orchestrator | + set -e 2026-03-24 01:47:24.896368 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 01:47:24.896382 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 01:47:24.896393 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 01:47:24.896403 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-24 01:47:24.896413 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 01:47:24.896424 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 
01:47:24.896435 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 01:47:24.896445 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 01:47:24.896456 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 01:47:24.896466 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 01:47:24.896476 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 01:47:24.896486 | orchestrator | ++ export ARA=false 2026-03-24 01:47:24.896497 | orchestrator | ++ ARA=false 2026-03-24 01:47:24.896507 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 01:47:24.896517 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 01:47:24.896527 | orchestrator | ++ export TEMPEST=false 2026-03-24 01:47:24.896538 | orchestrator | ++ TEMPEST=false 2026-03-24 01:47:24.896548 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 01:47:24.896558 | orchestrator | ++ IS_ZUUL=true 2026-03-24 01:47:24.896568 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 01:47:24.896578 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 01:47:24.896588 | orchestrator | ++ export EXTERNAL_API=false 2026-03-24 01:47:24.896598 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 01:47:24.896608 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 01:47:24.896618 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 01:47:24.896629 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 01:47:24.896639 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 01:47:24.896649 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 01:47:24.896665 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 01:47:24.896683 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 01:47:24.896699 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 01:47:24.896715 | orchestrator | ++ INTERACTIVE=false 2026-03-24 01:47:24.896730 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 01:47:24.896775 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-03-24 01:47:24.896796 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-24 01:47:24.896814 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-24 01:47:24.903843 | orchestrator | + set -e 2026-03-24 01:47:24.903925 | orchestrator | + VERSION=9.5.0 2026-03-24 01:47:24.903942 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-24 01:47:24.910878 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-24 01:47:24.910957 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-24 01:47:24.915127 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-24 01:47:24.918572 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-24 01:47:24.926099 | orchestrator | /opt/configuration ~ 2026-03-24 01:47:24.926174 | orchestrator | + set -e 2026-03-24 01:47:24.926186 | orchestrator | + pushd /opt/configuration 2026-03-24 01:47:24.926198 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-24 01:47:24.928606 | orchestrator | + source /opt/venv/bin/activate 2026-03-24 01:47:24.929497 | orchestrator | ++ deactivate nondestructive 2026-03-24 01:47:24.929531 | orchestrator | ++ '[' -n '' ']' 2026-03-24 01:47:24.929552 | orchestrator | ++ '[' -n '' ']' 2026-03-24 01:47:24.929598 | orchestrator | ++ hash -r 2026-03-24 01:47:24.929622 | orchestrator | ++ '[' -n '' ']' 2026-03-24 01:47:24.929638 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-24 01:47:24.929653 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-24 01:47:24.929668 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-24 01:47:24.929713 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-24 01:47:24.929724 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-24 01:47:24.929734 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-24 01:47:24.929743 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-24 01:47:24.929786 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-24 01:47:24.929874 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-24 01:47:24.929888 | orchestrator | ++ export PATH 2026-03-24 01:47:24.929902 | orchestrator | ++ '[' -n '' ']' 2026-03-24 01:47:24.929982 | orchestrator | ++ '[' -z '' ']' 2026-03-24 01:47:24.929994 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-24 01:47:24.930006 | orchestrator | ++ PS1='(venv) ' 2026-03-24 01:47:24.930046 | orchestrator | ++ export PS1 2026-03-24 01:47:24.930095 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-24 01:47:24.930106 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-24 01:47:24.930265 | orchestrator | ++ hash -r 2026-03-24 01:47:24.930279 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-24 01:47:25.857409 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-24 01:47:25.857884 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-03-24 01:47:25.859253 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-24 01:47:25.860548 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-24 01:47:25.861680 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-24 01:47:25.871206 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-24 01:47:25.872689 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-24 01:47:25.873678 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-24 01:47:25.874785 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-24 01:47:25.903365 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-24 01:47:25.904273 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-24 01:47:25.905960 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-24 01:47:25.907344 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-24 01:47:25.910985 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-24 01:47:26.102111 | orchestrator | ++ which gilt 2026-03-24 01:47:26.104203 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-24 01:47:26.104281 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-24 01:47:26.331546 | orchestrator | osism.cfg-generics: 2026-03-24 01:47:26.480042 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-24 01:47:26.480174 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-24 01:47:26.480380 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-24 01:47:26.480478 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-24 01:47:27.150831 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-24 01:47:27.160746 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-24 01:47:27.509823 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-24 01:47:27.562919 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-24 01:47:27.563050 | orchestrator | + deactivate 2026-03-24 01:47:27.563075 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-24 01:47:27.563097 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-24 01:47:27.563115 | orchestrator | + export PATH 2026-03-24 01:47:27.563134 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-24 01:47:27.563153 | orchestrator | + '[' -n '' ']' 2026-03-24 01:47:27.563174 | orchestrator | + hash -r 2026-03-24 01:47:27.563193 | orchestrator | + '[' -n '' ']' 2026-03-24 01:47:27.563213 | orchestrator | + unset VIRTUAL_ENV 2026-03-24 01:47:27.563254 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-24 01:47:27.563286 | orchestrator | ~ 2026-03-24 01:47:27.563307 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-24 01:47:27.563326 | orchestrator | + unset -f deactivate 2026-03-24 01:47:27.563344 | orchestrator | + popd 2026-03-24 01:47:27.564913 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-24 01:47:27.564999 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-24 01:47:27.565680 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-24 01:47:27.627551 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-24 01:47:27.627642 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-24 01:47:27.628665 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-24 01:47:27.689421 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-24 01:47:27.690294 | orchestrator | ++ semver 2024.2 2025.1 2026-03-24 01:47:27.750540 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-24 01:47:27.750624 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-24 01:47:27.839210 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-24 01:47:27.839302 | orchestrator | + source /opt/venv/bin/activate 2026-03-24 01:47:27.839313 | orchestrator | ++ deactivate nondestructive 2026-03-24 01:47:27.839321 | orchestrator | ++ '[' -n '' ']' 2026-03-24 01:47:27.839329 | orchestrator | ++ '[' -n '' ']' 2026-03-24 01:47:27.839337 | orchestrator | ++ hash -r 2026-03-24 01:47:27.839344 | orchestrator | ++ '[' -n '' ']' 2026-03-24 01:47:27.839352 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-24 01:47:27.839359 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-24 01:47:27.839366 | orchestrator | ++ '[' '!' 
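The `semver` comparisons above gate configuration on the deployed version: `semver 9.5.0 7.0.0` yields 1 (at least 7.0.0), which emits `enable_osism_kubernetes: true`, while the 10.0.0-0 and 2025.1 checks yield -1 and their branches are skipped. A sketch of the same gating, with GNU `sort -V` standing in for the job's `semver` helper (an assumption; the real helper prints -1/0/1):

```shell
# version_ge A B: true when A >= B by version sort; a stand-in for the
# job's `semver` comparison helper.
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

MANAGER_VERSION=9.5.0
FLAG=
# Taken in the trace: 9.5.0 >= 7.0.0
if version_ge "$MANAGER_VERSION" 7.0.0; then
    FLAG='enable_osism_kubernetes: true'
fi
# Skipped in the trace: 9.5.0 < 10.0.0-0
if version_ge "$MANAGER_VERSION" 10.0.0-0; then
    FLAG=unexpected
fi
```

Gating on version order like this keeps one configuration repository usable across several manager releases.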
nondestructive = nondestructive ']' 2026-03-24 01:47:27.839384 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-24 01:47:27.839392 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-24 01:47:27.839400 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-24 01:47:27.839406 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-24 01:47:27.839414 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-24 01:47:27.839535 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-24 01:47:27.839547 | orchestrator | ++ export PATH 2026-03-24 01:47:27.839590 | orchestrator | ++ '[' -n '' ']' 2026-03-24 01:47:27.839596 | orchestrator | ++ '[' -z '' ']' 2026-03-24 01:47:27.839601 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-24 01:47:27.839709 | orchestrator | ++ PS1='(venv) ' 2026-03-24 01:47:27.839779 | orchestrator | ++ export PS1 2026-03-24 01:47:27.839786 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-24 01:47:27.839790 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-24 01:47:27.839795 | orchestrator | ++ hash -r 2026-03-24 01:47:27.840008 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-24 01:47:28.800725 | orchestrator | 2026-03-24 01:47:28.800882 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-24 01:47:28.800900 | orchestrator | 2026-03-24 01:47:28.800913 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-24 01:47:29.362452 | orchestrator | ok: [testbed-manager] 2026-03-24 01:47:29.362586 | orchestrator | 2026-03-24 01:47:29.362612 | orchestrator | TASK [Copy fact files] ********************************************************* 
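The playbook run above is invoked with `-i testbed-manager,`: the trailing comma makes ansible-playbook treat the argument as an inline comma-separated host list rather than a path to an inventory file, so no inventory file is needed for a single host. As a command-line fragment (paths taken from the trace):

```shell
# Trailing comma => inline host list, not an inventory file path.
ansible-playbook -i 'testbed-manager,' \
    --vault-password-file /opt/configuration/environments/.vault_pass \
    /opt/configuration/ansible/manager-part-3.yml
```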
2026-03-24 01:47:30.310084 | orchestrator | changed: [testbed-manager] 2026-03-24 01:47:30.310187 | orchestrator | 2026-03-24 01:47:30.310214 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-24 01:47:30.310271 | orchestrator | 2026-03-24 01:47:30.310293 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-24 01:47:32.572446 | orchestrator | ok: [testbed-manager] 2026-03-24 01:47:32.572565 | orchestrator | 2026-03-24 01:47:32.572580 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-24 01:47:32.614296 | orchestrator | ok: [testbed-manager] 2026-03-24 01:47:32.614413 | orchestrator | 2026-03-24 01:47:32.614432 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-24 01:47:33.077622 | orchestrator | changed: [testbed-manager] 2026-03-24 01:47:33.077816 | orchestrator | 2026-03-24 01:47:33.077845 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-24 01:47:33.120946 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:47:33.121035 | orchestrator | 2026-03-24 01:47:33.121051 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-24 01:47:33.454472 | orchestrator | changed: [testbed-manager] 2026-03-24 01:47:33.454564 | orchestrator | 2026-03-24 01:47:33.454577 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-24 01:47:33.782591 | orchestrator | ok: [testbed-manager] 2026-03-24 01:47:33.782695 | orchestrator | 2026-03-24 01:47:33.782726 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-24 01:47:33.904738 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:47:33.904840 | orchestrator | 2026-03-24 01:47:33.904850 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-24 01:47:33.904858 | orchestrator | 2026-03-24 01:47:33.904865 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-24 01:47:35.587327 | orchestrator | ok: [testbed-manager] 2026-03-24 01:47:35.587455 | orchestrator | 2026-03-24 01:47:35.587474 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-24 01:47:35.681366 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-24 01:47:35.681491 | orchestrator | 2026-03-24 01:47:35.681514 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-24 01:47:35.732656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-24 01:47:35.732754 | orchestrator | 2026-03-24 01:47:35.732853 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-24 01:47:36.705171 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-24 01:47:36.705276 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-24 01:47:36.705291 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-24 01:47:36.705304 | orchestrator | 2026-03-24 01:47:36.705320 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-24 01:47:38.281321 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-24 01:47:38.281459 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-24 01:47:38.281477 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-24 01:47:38.281491 | orchestrator | 2026-03-24 01:47:38.281504 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-24 01:47:38.866300 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-24 01:47:38.866404 | orchestrator | changed: [testbed-manager] 2026-03-24 01:47:38.866422 | orchestrator | 2026-03-24 01:47:38.866436 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-24 01:47:39.455353 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-24 01:47:39.455457 | orchestrator | changed: [testbed-manager] 2026-03-24 01:47:39.455475 | orchestrator | 2026-03-24 01:47:39.455489 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-24 01:47:39.511982 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:47:39.512080 | orchestrator | 2026-03-24 01:47:39.512093 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-24 01:47:39.856182 | orchestrator | ok: [testbed-manager] 2026-03-24 01:47:39.856272 | orchestrator | 2026-03-24 01:47:39.856284 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-24 01:47:39.927453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-24 01:47:39.927550 | orchestrator | 2026-03-24 01:47:39.927567 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-24 01:47:40.884474 | orchestrator | changed: [testbed-manager] 2026-03-24 01:47:40.884565 | orchestrator | 2026-03-24 01:47:40.884578 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-24 01:47:41.588448 | orchestrator | changed: [testbed-manager] 2026-03-24 01:47:41.588555 | orchestrator | 2026-03-24 01:47:41.588572 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-24 01:47:51.582250 | 
orchestrator | changed: [testbed-manager] 2026-03-24 01:47:51.582402 | orchestrator | 2026-03-24 01:47:51.582434 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-24 01:47:51.639073 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:47:51.639172 | orchestrator | 2026-03-24 01:47:51.639210 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-24 01:47:51.639224 | orchestrator | 2026-03-24 01:47:51.639234 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-24 01:47:53.378120 | orchestrator | ok: [testbed-manager] 2026-03-24 01:47:53.378252 | orchestrator | 2026-03-24 01:47:53.378271 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-24 01:47:53.487767 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-24 01:47:53.487932 | orchestrator | 2026-03-24 01:47:53.487950 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-24 01:47:53.544442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-24 01:47:53.544552 | orchestrator | 2026-03-24 01:47:53.544579 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-24 01:47:55.795142 | orchestrator | ok: [testbed-manager] 2026-03-24 01:47:55.795232 | orchestrator | 2026-03-24 01:47:55.795245 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-24 01:47:55.851317 | orchestrator | ok: [testbed-manager] 2026-03-24 01:47:55.851392 | orchestrator | 2026-03-24 01:47:55.851400 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-24 01:47:55.972079 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-24 01:47:55.972213 | orchestrator | 2026-03-24 01:47:55.972230 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-24 01:47:58.691596 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-24 01:47:58.691706 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-24 01:47:58.691722 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-24 01:47:58.691734 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-24 01:47:58.691745 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-24 01:47:58.691756 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-24 01:47:58.691767 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-24 01:47:58.691778 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-24 01:47:58.691838 | orchestrator | 2026-03-24 01:47:58.691850 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-24 01:47:59.284306 | orchestrator | changed: [testbed-manager] 2026-03-24 01:47:59.284473 | orchestrator | 2026-03-24 01:47:59.284507 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-24 01:47:59.885104 | orchestrator | changed: [testbed-manager] 2026-03-24 01:47:59.885194 | orchestrator | 2026-03-24 01:47:59.885208 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-24 01:47:59.970162 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-24 01:47:59.970233 | orchestrator | 2026-03-24 01:47:59.970240 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-24 01:48:01.167309 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-24 01:48:01.167415 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-24 01:48:01.167431 | orchestrator | 2026-03-24 01:48:01.167444 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-24 01:48:01.785311 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:01.785414 | orchestrator | 2026-03-24 01:48:01.785432 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-24 01:48:01.840167 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:48:01.840248 | orchestrator | 2026-03-24 01:48:01.840258 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-24 01:48:01.916962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-24 01:48:01.917055 | orchestrator | 2026-03-24 01:48:01.917067 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-24 01:48:02.529752 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:02.529952 | orchestrator | 2026-03-24 01:48:02.529977 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-24 01:48:02.593370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-24 01:48:02.593492 | orchestrator | 2026-03-24 01:48:02.593518 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-24 01:48:03.921092 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-24 01:48:03.921224 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-24 01:48:03.921241 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:03.921253 | orchestrator | 2026-03-24 01:48:03.921265 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-24 01:48:04.512964 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:04.513067 | orchestrator | 2026-03-24 01:48:04.513084 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-24 01:48:04.566510 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:48:04.566613 | orchestrator | 2026-03-24 01:48:04.566629 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-24 01:48:04.658146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-24 01:48:04.658232 | orchestrator | 2026-03-24 01:48:04.658244 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-24 01:48:05.166615 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:05.166719 | orchestrator | 2026-03-24 01:48:05.166735 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-24 01:48:05.547717 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:05.547880 | orchestrator | 2026-03-24 01:48:05.547898 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-24 01:48:06.601348 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-24 01:48:06.601451 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-24 01:48:06.601466 | orchestrator | 2026-03-24 01:48:06.601480 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-24 01:48:07.194314 | orchestrator | changed: [testbed-manager] 2026-03-24 
01:48:07.194400 | orchestrator | 2026-03-24 01:48:07.194412 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-24 01:48:07.556992 | orchestrator | ok: [testbed-manager] 2026-03-24 01:48:07.557098 | orchestrator | 2026-03-24 01:48:07.557115 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-24 01:48:07.904414 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:07.904519 | orchestrator | 2026-03-24 01:48:07.904534 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-24 01:48:07.944061 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:48:07.944155 | orchestrator | 2026-03-24 01:48:07.944170 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-24 01:48:08.005395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-24 01:48:08.005521 | orchestrator | 2026-03-24 01:48:08.005537 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-24 01:48:08.057994 | orchestrator | ok: [testbed-manager] 2026-03-24 01:48:08.058187 | orchestrator | 2026-03-24 01:48:08.058218 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-24 01:48:10.011824 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-24 01:48:10.011955 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-24 01:48:10.011982 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-24 01:48:10.012002 | orchestrator | 2026-03-24 01:48:10.012021 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-24 01:48:10.644938 | orchestrator | changed: [testbed-manager] 2026-03-24 
01:48:10.645107 | orchestrator | 2026-03-24 01:48:10.645138 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-24 01:48:11.287244 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:11.287328 | orchestrator | 2026-03-24 01:48:11.287340 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-24 01:48:11.966317 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:11.966423 | orchestrator | 2026-03-24 01:48:11.966439 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-24 01:48:12.037356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-24 01:48:12.037464 | orchestrator | 2026-03-24 01:48:12.037483 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-24 01:48:12.080076 | orchestrator | ok: [testbed-manager] 2026-03-24 01:48:12.080178 | orchestrator | 2026-03-24 01:48:12.080195 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-24 01:48:12.752924 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-24 01:48:12.753032 | orchestrator | 2026-03-24 01:48:12.753048 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-24 01:48:12.835902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-24 01:48:12.835997 | orchestrator | 2026-03-24 01:48:12.836011 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-24 01:48:13.507916 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:13.508017 | orchestrator | 2026-03-24 01:48:13.508035 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-24 01:48:14.094742 | orchestrator | ok: [testbed-manager] 2026-03-24 01:48:14.094896 | orchestrator | 2026-03-24 01:48:14.094914 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-24 01:48:14.149280 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:48:14.149377 | orchestrator | 2026-03-24 01:48:14.149393 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-24 01:48:14.198979 | orchestrator | ok: [testbed-manager] 2026-03-24 01:48:14.199086 | orchestrator | 2026-03-24 01:48:14.199105 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-24 01:48:14.991926 | orchestrator | changed: [testbed-manager] 2026-03-24 01:48:14.992040 | orchestrator | 2026-03-24 01:48:14.992064 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-24 01:49:24.268301 | orchestrator | changed: [testbed-manager] 2026-03-24 01:49:24.268422 | orchestrator | 2026-03-24 01:49:24.268440 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-24 01:49:25.278087 | orchestrator | ok: [testbed-manager] 2026-03-24 01:49:25.278187 | orchestrator | 2026-03-24 01:49:25.278202 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-24 01:49:25.332509 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:49:25.332593 | orchestrator | 2026-03-24 01:49:25.332604 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-24 01:49:31.737453 | orchestrator | changed: [testbed-manager] 2026-03-24 01:49:31.737580 | orchestrator | 2026-03-24 01:49:31.737608 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-03-24 01:49:31.841953 | orchestrator | ok: [testbed-manager] 2026-03-24 01:49:31.842103 | orchestrator | 2026-03-24 01:49:31.842122 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-24 01:49:31.842135 | orchestrator | 2026-03-24 01:49:31.842148 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-24 01:49:31.892960 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:49:31.893060 | orchestrator | 2026-03-24 01:49:31.893075 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-24 01:50:31.960565 | orchestrator | Pausing for 60 seconds 2026-03-24 01:50:31.960682 | orchestrator | changed: [testbed-manager] 2026-03-24 01:50:31.960699 | orchestrator | 2026-03-24 01:50:31.960713 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-24 01:50:35.041802 | orchestrator | changed: [testbed-manager] 2026-03-24 01:50:35.041897 | orchestrator | 2026-03-24 01:50:35.041907 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-24 01:51:37.035143 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-24 01:51:37.035266 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-24 01:51:37.035305 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
2026-03-24 01:51:37.035319 | orchestrator | changed: [testbed-manager] 2026-03-24 01:51:37.035332 | orchestrator | 2026-03-24 01:51:37.035345 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-24 01:51:47.470497 | orchestrator | changed: [testbed-manager] 2026-03-24 01:51:47.470613 | orchestrator | 2026-03-24 01:51:47.470630 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-24 01:51:47.563734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-24 01:51:47.563827 | orchestrator | 2026-03-24 01:51:47.563842 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-24 01:51:47.563853 | orchestrator | 2026-03-24 01:51:47.563863 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-24 01:51:47.625409 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:51:47.625507 | orchestrator | 2026-03-24 01:51:47.625527 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-24 01:51:47.699868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-24 01:51:47.700013 | orchestrator | 2026-03-24 01:51:47.700034 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-24 01:51:48.488529 | orchestrator | changed: [testbed-manager] 2026-03-24 01:51:48.488683 | orchestrator | 2026-03-24 01:51:48.488704 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-24 01:51:51.776023 | orchestrator | ok: [testbed-manager] 2026-03-24 01:51:51.776142 | orchestrator | 2026-03-24 01:51:51.776166 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-24 01:51:51.852336 | orchestrator | ok: [testbed-manager] => { 2026-03-24 01:51:51.852429 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-24 01:51:51.852443 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-24 01:51:51.852455 | orchestrator | "Checking running containers against expected versions...", 2026-03-24 01:51:51.852466 | orchestrator | "", 2026-03-24 01:51:51.852477 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-24 01:51:51.852488 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-24 01:51:51.852499 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.852510 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-24 01:51:51.852520 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.852530 | orchestrator | "", 2026-03-24 01:51:51.852541 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-24 01:51:51.852576 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-24 01:51:51.852677 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.852688 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-24 01:51:51.852699 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.852709 | orchestrator | "", 2026-03-24 01:51:51.852719 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-24 01:51:51.852730 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-24 01:51:51.852740 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.852750 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-24 01:51:51.852760 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.852770 | orchestrator | 
"", 2026-03-24 01:51:51.852780 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-24 01:51:51.852791 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-24 01:51:51.852801 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.852811 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-24 01:51:51.852821 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.852831 | orchestrator | "", 2026-03-24 01:51:51.852844 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-24 01:51:51.852854 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-24 01:51:51.852864 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.852874 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-24 01:51:51.852886 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.852898 | orchestrator | "", 2026-03-24 01:51:51.852909 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-24 01:51:51.852920 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.852931 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.852942 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.852953 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.852964 | orchestrator | "", 2026-03-24 01:51:51.853021 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-24 01:51:51.853032 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-24 01:51:51.853043 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.853055 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-24 01:51:51.853067 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.853078 | orchestrator | "", 2026-03-24 01:51:51.853089 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-03-24 01:51:51.853101 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-24 01:51:51.853112 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.853124 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-24 01:51:51.853135 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.853146 | orchestrator | "", 2026-03-24 01:51:51.853158 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-24 01:51:51.853169 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-24 01:51:51.853180 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.853191 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-24 01:51:51.853202 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.853214 | orchestrator | "", 2026-03-24 01:51:51.853225 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-24 01:51:51.853237 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-24 01:51:51.853248 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.853259 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-24 01:51:51.853269 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.853279 | orchestrator | "", 2026-03-24 01:51:51.853297 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-24 01:51:51.853335 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853356 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.853372 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853390 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.853406 | orchestrator | "", 2026-03-24 01:51:51.853424 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-24 01:51:51.853440 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853457 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.853474 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853492 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.853511 | orchestrator | "", 2026-03-24 01:51:51.853528 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-24 01:51:51.853546 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853558 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.853568 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853578 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.853588 | orchestrator | "", 2026-03-24 01:51:51.853598 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-24 01:51:51.853608 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853618 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.853628 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853659 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.853670 | orchestrator | "", 2026-03-24 01:51:51.853680 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-24 01:51:51.853690 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853711 | orchestrator | " Enabled: true", 2026-03-24 01:51:51.853721 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-24 01:51:51.853731 | orchestrator | " Status: ✅ MATCH", 2026-03-24 01:51:51.853741 | orchestrator | "", 2026-03-24 01:51:51.853751 | orchestrator | "=== Summary ===", 2026-03-24 01:51:51.853762 | orchestrator | "Errors (version mismatches): 0", 2026-03-24 01:51:51.853772 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-24 01:51:51.853782 | orchestrator | "", 2026-03-24 01:51:51.853792 | orchestrator | "✅ All running containers match expected versions!" 2026-03-24 01:51:51.853802 | orchestrator | ] 2026-03-24 01:51:51.853813 | orchestrator | } 2026-03-24 01:51:51.853823 | orchestrator | 2026-03-24 01:51:51.853833 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-24 01:51:51.911543 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:51:51.911665 | orchestrator | 2026-03-24 01:51:51.911685 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 01:51:51.911699 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-24 01:51:51.911711 | orchestrator | 2026-03-24 01:51:52.016725 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-24 01:51:52.016822 | orchestrator | + deactivate 2026-03-24 01:51:52.016838 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-24 01:51:52.016852 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-24 01:51:52.016863 | orchestrator | + export PATH 2026-03-24 01:51:52.016880 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-24 01:51:52.016894 | orchestrator | + '[' -n '' ']' 2026-03-24 01:51:52.016904 | orchestrator | + hash -r 2026-03-24 01:51:52.016915 | orchestrator | + '[' -n '' ']' 2026-03-24 01:51:52.016925 | orchestrator | + unset VIRTUAL_ENV 2026-03-24 01:51:52.016936 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-24 01:51:52.016946 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-24 01:51:52.016957 | orchestrator | + unset -f deactivate 2026-03-24 01:51:52.017028 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-24 01:51:52.024422 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-24 01:51:52.024500 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-24 01:51:52.024537 | orchestrator | + local max_attempts=60 2026-03-24 01:51:52.024549 | orchestrator | + local name=ceph-ansible 2026-03-24 01:51:52.024559 | orchestrator | + local attempt_num=1 2026-03-24 01:51:52.025437 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 01:51:52.054127 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-24 01:51:52.054203 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-24 01:51:52.054211 | orchestrator | + local max_attempts=60 2026-03-24 01:51:52.054217 | orchestrator | + local name=kolla-ansible 2026-03-24 01:51:52.054223 | orchestrator | + local attempt_num=1 2026-03-24 01:51:52.055012 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-24 01:51:52.094580 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-24 01:51:52.094670 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-24 01:51:52.094685 | orchestrator | + local max_attempts=60 2026-03-24 01:51:52.094697 | orchestrator | + local name=osism-ansible 2026-03-24 01:51:52.094709 | orchestrator | + local attempt_num=1 2026-03-24 01:51:52.095335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-24 01:51:52.123338 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-24 01:51:52.123435 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-24 01:51:52.123449 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-24 01:51:52.811359 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-24 01:51:52.981429 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-24 01:51:52.981551 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-24 01:51:52.981578 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-24 01:51:52.981599 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-24 01:51:52.981620 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-24 01:51:52.981666 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-24 01:51:52.981686 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-24 01:51:52.981698 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-24 01:51:52.981726 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-24 01:51:52.981739 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-24 01:51:52.981751 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-03-24 01:51:52.981762 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-24 01:51:52.981774 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-24 01:51:52.981809 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-24 01:51:52.981821 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-24 01:51:52.981834 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-24 01:51:52.989281 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-24 01:51:53.030456 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-24 01:51:53.030610 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-24 01:51:53.032592 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-24 01:52:05.379569 | orchestrator | 2026-03-24 01:52:05 | INFO  | Task 4c7e2bef-66ac-4bdf-83e6-a1888bfab014 (resolvconf) was prepared for execution. 2026-03-24 01:52:05.379690 | orchestrator | 2026-03-24 01:52:05 | INFO  | It takes a moment until task 4c7e2bef-66ac-4bdf-83e6-a1888bfab014 (resolvconf) has been started and output is visible here. 
2026-03-24 01:52:20.375512 | orchestrator | 2026-03-24 01:52:20.375634 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-24 01:52:20.375653 | orchestrator | 2026-03-24 01:52:20.375666 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-24 01:52:20.375679 | orchestrator | Tuesday 24 March 2026 01:52:09 +0000 (0:00:00.101) 0:00:00.101 ********* 2026-03-24 01:52:20.375690 | orchestrator | ok: [testbed-manager] 2026-03-24 01:52:20.375703 | orchestrator | 2026-03-24 01:52:20.375715 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-24 01:52:20.375728 | orchestrator | Tuesday 24 March 2026 01:52:12 +0000 (0:00:03.411) 0:00:03.513 ********* 2026-03-24 01:52:20.375740 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:52:20.375752 | orchestrator | 2026-03-24 01:52:20.375764 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-24 01:52:20.375775 | orchestrator | Tuesday 24 March 2026 01:52:12 +0000 (0:00:00.077) 0:00:03.590 ********* 2026-03-24 01:52:20.375787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-24 01:52:20.375800 | orchestrator | 2026-03-24 01:52:20.375811 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-24 01:52:20.375823 | orchestrator | Tuesday 24 March 2026 01:52:12 +0000 (0:00:00.078) 0:00:03.669 ********* 2026-03-24 01:52:20.375852 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-24 01:52:20.375865 | orchestrator | 2026-03-24 01:52:20.375877 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-24 01:52:20.375889 | orchestrator | Tuesday 24 March 2026 01:52:12 +0000 (0:00:00.078) 0:00:03.748 ********* 2026-03-24 01:52:20.375900 | orchestrator | ok: [testbed-manager] 2026-03-24 01:52:20.375912 | orchestrator | 2026-03-24 01:52:20.375923 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-24 01:52:20.375935 | orchestrator | Tuesday 24 March 2026 01:52:13 +0000 (0:00:00.866) 0:00:04.614 ********* 2026-03-24 01:52:20.375946 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:52:20.375958 | orchestrator | 2026-03-24 01:52:20.375969 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-24 01:52:20.375981 | orchestrator | Tuesday 24 March 2026 01:52:13 +0000 (0:00:00.046) 0:00:04.661 ********* 2026-03-24 01:52:20.376076 | orchestrator | ok: [testbed-manager] 2026-03-24 01:52:20.376091 | orchestrator | 2026-03-24 01:52:20.376104 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-24 01:52:20.376116 | orchestrator | Tuesday 24 March 2026 01:52:15 +0000 (0:00:01.494) 0:00:06.155 ********* 2026-03-24 01:52:20.376130 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:52:20.376143 | orchestrator | 2026-03-24 01:52:20.376155 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-24 01:52:20.376169 | orchestrator | Tuesday 24 March 2026 01:52:15 +0000 (0:00:00.073) 0:00:06.229 ********* 2026-03-24 01:52:20.376182 | orchestrator | changed: [testbed-manager] 2026-03-24 01:52:20.376195 | orchestrator | 2026-03-24 01:52:20.376208 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-24 01:52:20.376221 | orchestrator | Tuesday 24 March 2026 01:52:15 +0000 (0:00:00.476) 0:00:06.705 ********* 2026-03-24 01:52:20.376233 | orchestrator | changed: 
[testbed-manager] 2026-03-24 01:52:20.376246 | orchestrator | 2026-03-24 01:52:20.376259 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-24 01:52:20.376272 | orchestrator | Tuesday 24 March 2026 01:52:16 +0000 (0:00:01.032) 0:00:07.737 ********* 2026-03-24 01:52:20.376285 | orchestrator | ok: [testbed-manager] 2026-03-24 01:52:20.376298 | orchestrator | 2026-03-24 01:52:20.376311 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-24 01:52:20.376323 | orchestrator | Tuesday 24 March 2026 01:52:18 +0000 (0:00:01.961) 0:00:09.699 ********* 2026-03-24 01:52:20.376336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-24 01:52:20.376349 | orchestrator | 2026-03-24 01:52:20.376362 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-24 01:52:20.376375 | orchestrator | Tuesday 24 March 2026 01:52:18 +0000 (0:00:00.100) 0:00:09.799 ********* 2026-03-24 01:52:20.376386 | orchestrator | changed: [testbed-manager] 2026-03-24 01:52:20.376398 | orchestrator | 2026-03-24 01:52:20.376409 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 01:52:20.376421 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-24 01:52:20.376432 | orchestrator | 2026-03-24 01:52:20.376444 | orchestrator | 2026-03-24 01:52:20.376455 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 01:52:20.376467 | orchestrator | Tuesday 24 March 2026 01:52:20 +0000 (0:00:01.193) 0:00:10.993 ********* 2026-03-24 01:52:20.376478 | orchestrator | =============================================================================== 2026-03-24 01:52:20.376490 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.41s 2026-03-24 01:52:20.376501 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.96s 2026-03-24 01:52:20.376513 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 1.49s 2026-03-24 01:52:20.376524 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s 2026-03-24 01:52:20.376536 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.03s 2026-03-24 01:52:20.376547 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.87s 2026-03-24 01:52:20.376577 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.48s 2026-03-24 01:52:20.376590 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2026-03-24 01:52:20.376601 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-24 01:52:20.376613 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-03-24 01:52:20.376624 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2026-03-24 01:52:20.376636 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-03-24 01:52:20.376655 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-03-24 01:52:20.686693 | orchestrator | + osism apply sshconfig 2026-03-24 01:52:32.690514 | orchestrator | 2026-03-24 01:52:32 | INFO  | Task 492dde71-df77-4da2-bea0-caccbdab55c0 (sshconfig) was prepared for execution. 
2026-03-24 01:52:32.690659 | orchestrator | 2026-03-24 01:52:32 | INFO  | It takes a moment until task 492dde71-df77-4da2-bea0-caccbdab55c0 (sshconfig) has been started and output is visible here. 2026-03-24 01:52:44.662518 | orchestrator | 2026-03-24 01:52:44.662666 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-24 01:52:44.662683 | orchestrator | 2026-03-24 01:52:44.662721 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-24 01:52:44.662733 | orchestrator | Tuesday 24 March 2026 01:52:36 +0000 (0:00:00.167) 0:00:00.167 ********* 2026-03-24 01:52:44.662744 | orchestrator | ok: [testbed-manager] 2026-03-24 01:52:44.662755 | orchestrator | 2026-03-24 01:52:44.662766 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-24 01:52:44.662776 | orchestrator | Tuesday 24 March 2026 01:52:37 +0000 (0:00:00.566) 0:00:00.734 ********* 2026-03-24 01:52:44.662787 | orchestrator | changed: [testbed-manager] 2026-03-24 01:52:44.662799 | orchestrator | 2026-03-24 01:52:44.662809 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-24 01:52:44.662819 | orchestrator | Tuesday 24 March 2026 01:52:37 +0000 (0:00:00.519) 0:00:01.253 ********* 2026-03-24 01:52:44.662830 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-24 01:52:44.662841 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-24 01:52:44.662851 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-24 01:52:44.662862 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-24 01:52:44.662872 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-24 01:52:44.662882 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-24 01:52:44.662893 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-24 01:52:44.662903 | orchestrator | 2026-03-24 01:52:44.662913 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-24 01:52:44.662923 | orchestrator | Tuesday 24 March 2026 01:52:43 +0000 (0:00:05.816) 0:00:07.070 ********* 2026-03-24 01:52:44.662934 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:52:44.662944 | orchestrator | 2026-03-24 01:52:44.662954 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-24 01:52:44.662964 | orchestrator | Tuesday 24 March 2026 01:52:43 +0000 (0:00:00.077) 0:00:07.148 ********* 2026-03-24 01:52:44.662974 | orchestrator | changed: [testbed-manager] 2026-03-24 01:52:44.662985 | orchestrator | 2026-03-24 01:52:44.662995 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 01:52:44.663035 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 01:52:44.663055 | orchestrator | 2026-03-24 01:52:44.663071 | orchestrator | 2026-03-24 01:52:44.663088 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 01:52:44.663104 | orchestrator | Tuesday 24 March 2026 01:52:44 +0000 (0:00:00.539) 0:00:07.688 ********* 2026-03-24 01:52:44.663118 | orchestrator | =============================================================================== 2026-03-24 01:52:44.663135 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.82s 2026-03-24 01:52:44.663153 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s 2026-03-24 01:52:44.663170 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s 2026-03-24 01:52:44.663187 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.52s 2026-03-24 01:52:44.663244 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-03-24 01:52:44.961596 | orchestrator | + osism apply known-hosts 2026-03-24 01:52:57.004943 | orchestrator | 2026-03-24 01:52:57 | INFO  | Task e65f0177-da98-428b-898f-47868553fa8c (known-hosts) was prepared for execution. 2026-03-24 01:52:57.005104 | orchestrator | 2026-03-24 01:52:57 | INFO  | It takes a moment until task e65f0177-da98-428b-898f-47868553fa8c (known-hosts) has been started and output is visible here. 2026-03-24 01:53:13.367715 | orchestrator | 2026-03-24 01:53:13.367808 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-24 01:53:13.367821 | orchestrator | 2026-03-24 01:53:13.367831 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-24 01:53:13.367841 | orchestrator | Tuesday 24 March 2026 01:53:00 +0000 (0:00:00.169) 0:00:00.169 ********* 2026-03-24 01:53:13.367851 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-24 01:53:13.367862 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-24 01:53:13.367877 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-24 01:53:13.367886 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-24 01:53:13.367894 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-24 01:53:13.367903 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-24 01:53:13.367911 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-24 01:53:13.367920 | orchestrator | 2026-03-24 01:53:13.367928 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-24 01:53:13.367938 | orchestrator | Tuesday 24 March 2026 01:53:06 +0000 (0:00:06.037) 0:00:06.207 ********* 2026-03-24 
01:53:13.367948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-24 01:53:13.367958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-24 01:53:13.367967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-24 01:53:13.367975 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-24 01:53:13.367984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-24 01:53:13.368000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-24 01:53:13.368009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-24 01:53:13.368017 | orchestrator | 2026-03-24 01:53:13.368026 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:13.368060 | orchestrator | Tuesday 24 March 2026 01:53:07 +0000 (0:00:00.161) 0:00:06.368 ********* 2026-03-24 01:53:13.368071 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIJPTKbMQ5zDWrA1wu2vHTYgctiiWS/ckg9Joh4ZvZabY) 2026-03-24 01:53:13.368087 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC144CtivRbFHr7pk4XbIUZtJlRg9SQts2lTkKly8UFEu2WlKyxn5a64dCG37T4+mOMOZbzVRP7CX3Lld2Q/3WfRaAwqUIfv2iqZqv2dkZObn9KnvsLkCnQgNoqb+2aFRXDN5bU5R1z8/pqLPRXacpA9oywWPWawD+FTV7Kamxoyiw/L5hpHvncZBKHOKVcxD9rknsPSWARF2uyyUhs2dsMVeu3YnHlhZ1tCW4fVwA+pTD2vV4RpIq+yrykpy9NMEgRYXIzewEDYYYbk28jgbAgkxd/xIdrLHknnUwr7qY58aYI78CfOmqgvHjr4p2J+2wTEHy6l0/CANe1V3fu84oBm0BgBTDHAVfJ2yzF7QBF8/OgbHSrFP4KpLqzXg536UB9FpSxpOuCDu650fmD4lp24aM2ifJ4YNrZ+cfowORjuMoGpLmJ8YYYSKfSmxhlfwy1ICcxPQYReAbRW9QUEIq3zgO1qr8dcScbgbzJWppiGaHxxvVZSJIgd132uct0UPs=) 2026-03-24 01:53:13.368117 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH5wuPOnnYsDTkhPZFADb9lDgUxN9TReZ6tTode5/qpRZuJYxkUC2Vr40vojXr24o4GvYfwd6WDa/2m2x2M0VkM=) 2026-03-24 01:53:13.368133 | orchestrator | 2026-03-24 01:53:13.368146 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:13.368159 | orchestrator | Tuesday 24 March 2026 01:53:08 +0000 (0:00:01.127) 0:00:07.496 ********* 2026-03-24 01:53:13.368183 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCLIw9mnYDfzFOqn9FhMgS/zDl0mE4c3Ap+K+/MmI5X1dZVUf+8UB0mXCbQ5GxTzEMSZ6tmogYOslV17eAeSC5IUqgLshu+jL4J7bjTaSq03JJ7ND+G2EdsDXOSAaNoGHkgYcV5+XFjrFKdrMb94HYtqnwK4lWow+OHF0tbj1CxviYpIKRT8GzuQ95dXmWK4QhYciI4ttFj+3MssOmG8FS9IYWbdkldyj/Hxf4a7ajRb7/XkzUAkKGFoXUrNKYRcHrQ1vligzvuRVo6vno8PiHD5DKfl2sJ9yamRL78XDQjMY70pbhCXQHNVCd8Px70sY+X03cpmDjYk6mB4xT54HsQYGEgn239YT4AZYG8WwGwxQPPAp2VB4Zb6G2pVlxKMmTiiHAgNUVqAfbBO1aVJuu+xgLsJDqrfYWe0uIMoWnoAqn0QctRA0KKVg6+urtIyIU2UXUZPsBAzoafwTI9+i8K3kslc/5RQmvxADPtWwcg9dd7xepXCyGFWoblgNJsY+s=) 2026-03-24 01:53:13.368193 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJDOA/giLOPVR/26yizlZw3KTpKSeALu8EdLtaeHGXaw5uFC58GW9QZp+rswvVkpLajNHt3aWAFixPUHAkZ4zNw=) 2026-03-24 01:53:13.368203 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO9GrAZk2bZuM2/thmJUT/nC9t0/FxEHv7tUPp2WahLu) 2026-03-24 01:53:13.368216 | orchestrator | 2026-03-24 01:53:13.368230 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:13.368243 | orchestrator | Tuesday 24 March 2026 01:53:09 +0000 (0:00:00.998) 0:00:08.494 ********* 2026-03-24 01:53:13.368257 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBtZUc8QKTK5iCLgElrgppVYLgrhWdi4qLsGDYJw/+79ClWharqBUDIhuzhbjWM8R2hN86qG0nDolBTh4xnR+wI=) 2026-03-24 01:53:13.368271 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIQO1i9g/1tJR/1+6kg9EUigfHtItW43LPZpB6j7xZVV) 2026-03-24 01:53:13.368287 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7eAKp6a0skNX50uLIVg5pKiZklrcbc/i0to1LVCzpqfoJC/oWvrkdHa+7uKSwiLWte8WyJ67pnCfLp6tgZKKzMPNMBa3qUHvWCFF83l5tR0lVc3foY7Czhy6wXzkoNu4GzalWFaugoQr/yF0d0Zn16vtWQ4nQfmBT7QONFnT2PZ2ZNV/1OnY5MTaG60eklW9WnaVMax3yy2xQO+oewXlBA4SFjIo412iOg9VQtSni8wYcKwYbgxqXZ1oaScLTlpl0TcrPDsrNVcRG/xvWy0UnlvpMlzh/t/KztjXFPSFKMZd0qwm8ApiTZxrJkxLqHXgBjTyHfVJtwSOGFz8dJuaC8uZWDDM59DTFKPSLE+oXRduB613o5iLWLm+3Opyuzmg0R8XrwKWd4oJSjcC+CvXDSygn8V78wAyN+BD4hPvcLmO+ji6LKeFahfW2UxLzYLGDdBs+Fsz+OPQUZ4Of5U5tH2g7RSqdrjeh7y2LazUbXEiTcCDVYWEznJI1ST+vYVE=) 2026-03-24 01:53:13.368297 | orchestrator | 2026-03-24 01:53:13.368306 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:13.368314 | orchestrator | Tuesday 24 March 2026 01:53:10 +0000 (0:00:01.024) 
0:00:09.518 ********* 2026-03-24 01:53:13.368323 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwd7wff0AG/hlIoZ7WabzWriXIYwipZovdSBVvtEg9vHpOK4sGqx8cb6Q6G+i/jrO2uer8oH6kvoMRa5tpHESuvRGAuqtC1UONT7poEHKgbvG9mjUjmGKjcg00vWVsBEPHZtbiOG8ZLYraYlkBR4ePSWQnvchNszJZ30uUyJrtv0MAvlgpSECcLxMV0CX+nv4uD9ucCDm9m+e3MMyEPzILJBsyo/WW2OmsGVHAXt02B2q3ipE9ne32ax0YASL6LxTmK4jwxpLpwH1ZI6wflSXK586T9X8z18fXyLguyrlqqRRShyst3ceAydsj33ZWpUS1bxUV+LEdQlVx2wiVfwcv2rkNsc9/iM521blHHbFdWD5WvyoGkUjqfm2spJmde4k2Py23jy7f2DnPCtnIjg/RbC1LBvuZiFBl7hR1HAEdpXDVu3xjEOXvYrXlCD7F7MuPoFdd1XakqovOQ1gIjy7wIxio0P6Xlg8CtY9VFDWsOSI6+TCsosjDQ1K4DhbG6IM=) 2026-03-24 01:53:13.368338 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKpq3BgpDoatjxONDtZ4q9hTy9O3SMVg3E2lBXWu+Uqa) 2026-03-24 01:53:13.368347 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFNVeFZTpABud5ZiuJ8VqUMSYGklgJ3gIrY+LVFVrIQPeX2ndBv7jcXnFD9wZdztcxcrvwaFThxpkxaIi787pLo=) 2026-03-24 01:53:13.368355 | orchestrator | 2026-03-24 01:53:13.368363 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:13.368374 | orchestrator | Tuesday 24 March 2026 01:53:11 +0000 (0:00:00.964) 0:00:10.483 ********* 2026-03-24 01:53:13.368451 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCnXwOCdtF4YK8nLQoK+PzY6ZpgfWoPyvTI/X0zzRph/z5em6bB+xRFwKvee6Ush/eTnuz4Ed6RuIrcdTX4S9slRHIHgBVV+yJuMpF0RXaAgaapktvr+6TDdkKYE4PXlkR0H0SZHKrzkmFH4tZWJXwNTv8jc83TDIhXAslVkH2bE9YwDmu9BIdsMiIJwyqH2jQCXHlmpgJEgRyOqSD3xWZ8KvLsP30YhfrL3Q5BFhm5DpW/k/3SCNl3tNiIOBukt+N60enb4ohP2zFgWkhnSXlTDCaLIVXFS547mwTTRqQqSFcWwgxHAdC7TTt7stHAVatVUtWkMiF/xSrRvRzPyX+PPtkW7jikJOOEz4szIM87gT/QdUKUxnQM2MohlotEJ2+PRKSWvJoPgiznpD8IooAzQrKhRHjuoclyHWt0y//t0RPgiuYeM9UQgqfoXOmJUMEKnTwr0uuqeCFUymYnmtIrTxjiZqfWpMQ4rmoOg0UY/6MNWvPbwL4IMIkRm2XWge0=) 2026-03-24 01:53:13.368461 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIFl5lC5dDkHV1w8nPJaCKeA6pzkaqGZZp+bLvvPi/uxK321lpGBelgUSqdad/7bmCGfhaBnMyxvVPjhG77UiQ8=) 2026-03-24 01:53:13.368470 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEC1R86RcgEvFQXrIVa+Os5dniQKB7ylducND62Ip6pL) 2026-03-24 01:53:13.368479 | orchestrator | 2026-03-24 01:53:13.368487 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:13.368495 | orchestrator | Tuesday 24 March 2026 01:53:12 +0000 (0:00:01.035) 0:00:11.518 ********* 2026-03-24 01:53:13.368512 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCMy4C9bxY484+NsGdm72vJ4gutYuzVT2sVmy8kNDUV5GEK7Unj5RKGqtuUH76aplhwbpINP1MnnU0kdEfF9yHKGHb2WTNj6vyeLenWtiPv/jb6EPtqdplrbb6DYKg5ZcJ1gBwU9L8sTtF2Dst+JsRKT+Ah6aT1dA9+L7nk0n7hoJB4wPmqu/Mw+uozpz+dYVP9RbwzRZlclQQ1WRUnXVW8wQKpyMedh6wh1ZldUdj80YqrjianMTZFiTzF2jrFhg5pNLk8Rvp0tlO/Ek6my9xwERenq5SnVfdru1w4++OVTPgZq9k9P8E9fqv9EuQTZ7UTaxCvfRvMkOuYJtIKGmjDHqQMNqhh/ZaqyhadqpVRDtMflFl3uSTqsa5Yl2eN0o53M6pumuBXJHWbF45GIiGLlkbPiYH3nP6cEHJt+cQl+PaiS3cFF6gR7Apm3UGEDlHpHBkIG6Fsub4tLpGJxebTXlWPxXfM3giHkdUOgw0yZjvmccnBOuWE/mDhpwYpLM0=) 2026-03-24 01:53:23.986424 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4t34mTVLKZZtdT6z4mBbhihMt+tQvlPHKpTfVHiyevKGYsQdZ79s8QADUJTUq+4RWiXXZIhhZib4I0p3tWlEQ=) 2026-03-24 01:53:23.986603 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDtA6mphrNJsbtEZHU/RjlOtdMPbqgrev63Tn+85ikHh) 2026-03-24 01:53:23.986636 | orchestrator | 2026-03-24 01:53:23.986658 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:23.986680 | orchestrator | Tuesday 24 March 2026 01:53:13 +0000 (0:00:01.059) 0:00:12.578 ********* 2026-03-24 01:53:23.986702 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCIDlpOPHJt47q9ePDFOQw+ULKRJnWXdYjOmUOWCkoE4ZTWQAAlr8u/vAxSNGvY71yihfDVI6aOWmSknEGUmJGAC1aK+7AYwfjC58fPSySxqqaHgyz9hDYOEeLIaYs9YDjoNppWuUcwJSXg15QutUnD6RORqymGuAHKjx/aFNbIsVRiFds/raqC19RckS2XwpQwUOGHzXk0f5xK+2bk7SWH1NmbD9IWOxGKm1z2sGUVZiiUXOF4ywrCCMx5lVi429/kvgurNjAcY82RRAaiEmj2LDLW0eC+YsT7f3pfkCv6C9xIStfPRh9ygRgTR+iKimeEEgk1cvJdj7i5rXq4xzei4Q8S5/62bhLiGSYPCgpbR/R/IlGgLU76sVnHQy4SIM2qvPxWQ36f047zImhthbeVyByCcfg50G+YGNMvbpZbADl5CCNmhpU3qzWxJsygsxZ2RtPHdLPXtOaKCgQEbMoudfn+7fDKZ1Tla2LdnhcaG+wOIu1KbPMfGkQViH5QyLs=) 2026-03-24 01:53:23.986719 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ8RuYvgRiVaGcsvcOBKxn6/JBa20OxbrfG0BfxvzeX3cClC/uXUs2E1L30T6qja/6Ga1MqQVvxBaQIwGsg673g=) 2026-03-24 01:53:23.986755 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDfe+Xf3oczD3s4EypVSDUP5F0kiheiy2hpzknYtCT1S) 2026-03-24 01:53:23.986767 | orchestrator | 2026-03-24 01:53:23.986785 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-24 01:53:23.986805 | orchestrator | Tuesday 24 March 2026 01:53:14 +0000 (0:00:01.003) 
0:00:13.582 ********* 2026-03-24 01:53:23.986822 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-24 01:53:23.986839 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-24 01:53:23.986855 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-24 01:53:23.986873 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-24 01:53:23.986893 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-24 01:53:23.986913 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-24 01:53:23.986933 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-24 01:53:23.986952 | orchestrator | 2026-03-24 01:53:23.986968 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-24 01:53:23.986982 | orchestrator | Tuesday 24 March 2026 01:53:19 +0000 (0:00:05.410) 0:00:18.992 ********* 2026-03-24 01:53:23.986996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-24 01:53:23.987011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-24 01:53:23.987024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-24 01:53:23.987036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-24 01:53:23.987092 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-24 01:53:23.987105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-24 01:53:23.987116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-24 01:53:23.987128 | orchestrator | 2026-03-24 01:53:23.987139 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:23.987151 | orchestrator | Tuesday 24 March 2026 01:53:19 +0000 (0:00:00.160) 0:00:19.153 ********* 2026-03-24 01:53:23.987163 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJPTKbMQ5zDWrA1wu2vHTYgctiiWS/ckg9Joh4ZvZabY) 2026-03-24 01:53:23.987206 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC144CtivRbFHr7pk4XbIUZtJlRg9SQts2lTkKly8UFEu2WlKyxn5a64dCG37T4+mOMOZbzVRP7CX3Lld2Q/3WfRaAwqUIfv2iqZqv2dkZObn9KnvsLkCnQgNoqb+2aFRXDN5bU5R1z8/pqLPRXacpA9oywWPWawD+FTV7Kamxoyiw/L5hpHvncZBKHOKVcxD9rknsPSWARF2uyyUhs2dsMVeu3YnHlhZ1tCW4fVwA+pTD2vV4RpIq+yrykpy9NMEgRYXIzewEDYYYbk28jgbAgkxd/xIdrLHknnUwr7qY58aYI78CfOmqgvHjr4p2J+2wTEHy6l0/CANe1V3fu84oBm0BgBTDHAVfJ2yzF7QBF8/OgbHSrFP4KpLqzXg536UB9FpSxpOuCDu650fmD4lp24aM2ifJ4YNrZ+cfowORjuMoGpLmJ8YYYSKfSmxhlfwy1ICcxPQYReAbRW9QUEIq3zgO1qr8dcScbgbzJWppiGaHxxvVZSJIgd132uct0UPs=) 2026-03-24 01:53:23.987229 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH5wuPOnnYsDTkhPZFADb9lDgUxN9TReZ6tTode5/qpRZuJYxkUC2Vr40vojXr24o4GvYfwd6WDa/2m2x2M0VkM=) 2026-03-24 
01:53:23.987252 | orchestrator | 2026-03-24 01:53:23.987264 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:23.987276 | orchestrator | Tuesday 24 March 2026 01:53:20 +0000 (0:00:00.978) 0:00:20.131 ********* 2026-03-24 01:53:23.987288 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJDOA/giLOPVR/26yizlZw3KTpKSeALu8EdLtaeHGXaw5uFC58GW9QZp+rswvVkpLajNHt3aWAFixPUHAkZ4zNw=) 2026-03-24 01:53:23.987300 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCLIw9mnYDfzFOqn9FhMgS/zDl0mE4c3Ap+K+/MmI5X1dZVUf+8UB0mXCbQ5GxTzEMSZ6tmogYOslV17eAeSC5IUqgLshu+jL4J7bjTaSq03JJ7ND+G2EdsDXOSAaNoGHkgYcV5+XFjrFKdrMb94HYtqnwK4lWow+OHF0tbj1CxviYpIKRT8GzuQ95dXmWK4QhYciI4ttFj+3MssOmG8FS9IYWbdkldyj/Hxf4a7ajRb7/XkzUAkKGFoXUrNKYRcHrQ1vligzvuRVo6vno8PiHD5DKfl2sJ9yamRL78XDQjMY70pbhCXQHNVCd8Px70sY+X03cpmDjYk6mB4xT54HsQYGEgn239YT4AZYG8WwGwxQPPAp2VB4Zb6G2pVlxKMmTiiHAgNUVqAfbBO1aVJuu+xgLsJDqrfYWe0uIMoWnoAqn0QctRA0KKVg6+urtIyIU2UXUZPsBAzoafwTI9+i8K3kslc/5RQmvxADPtWwcg9dd7xepXCyGFWoblgNJsY+s=) 2026-03-24 01:53:23.987313 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO9GrAZk2bZuM2/thmJUT/nC9t0/FxEHv7tUPp2WahLu) 2026-03-24 01:53:23.987324 | orchestrator | 2026-03-24 01:53:23.987336 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:23.987347 | orchestrator | Tuesday 24 March 2026 01:53:21 +0000 (0:00:00.980) 0:00:21.112 ********* 2026-03-24 01:53:23.987359 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBtZUc8QKTK5iCLgElrgppVYLgrhWdi4qLsGDYJw/+79ClWharqBUDIhuzhbjWM8R2hN86qG0nDolBTh4xnR+wI=) 2026-03-24 01:53:23.987371 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC7eAKp6a0skNX50uLIVg5pKiZklrcbc/i0to1LVCzpqfoJC/oWvrkdHa+7uKSwiLWte8WyJ67pnCfLp6tgZKKzMPNMBa3qUHvWCFF83l5tR0lVc3foY7Czhy6wXzkoNu4GzalWFaugoQr/yF0d0Zn16vtWQ4nQfmBT7QONFnT2PZ2ZNV/1OnY5MTaG60eklW9WnaVMax3yy2xQO+oewXlBA4SFjIo412iOg9VQtSni8wYcKwYbgxqXZ1oaScLTlpl0TcrPDsrNVcRG/xvWy0UnlvpMlzh/t/KztjXFPSFKMZd0qwm8ApiTZxrJkxLqHXgBjTyHfVJtwSOGFz8dJuaC8uZWDDM59DTFKPSLE+oXRduB613o5iLWLm+3Opyuzmg0R8XrwKWd4oJSjcC+CvXDSygn8V78wAyN+BD4hPvcLmO+ji6LKeFahfW2UxLzYLGDdBs+Fsz+OPQUZ4Of5U5tH2g7RSqdrjeh7y2LazUbXEiTcCDVYWEznJI1ST+vYVE=) 2026-03-24 01:53:23.987383 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIQO1i9g/1tJR/1+6kg9EUigfHtItW43LPZpB6j7xZVV) 2026-03-24 01:53:23.987395 | orchestrator | 2026-03-24 01:53:23.987406 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:23.987418 | orchestrator | Tuesday 24 March 2026 01:53:22 +0000 (0:00:01.026) 0:00:22.139 ********* 2026-03-24 01:53:23.987429 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwd7wff0AG/hlIoZ7WabzWriXIYwipZovdSBVvtEg9vHpOK4sGqx8cb6Q6G+i/jrO2uer8oH6kvoMRa5tpHESuvRGAuqtC1UONT7poEHKgbvG9mjUjmGKjcg00vWVsBEPHZtbiOG8ZLYraYlkBR4ePSWQnvchNszJZ30uUyJrtv0MAvlgpSECcLxMV0CX+nv4uD9ucCDm9m+e3MMyEPzILJBsyo/WW2OmsGVHAXt02B2q3ipE9ne32ax0YASL6LxTmK4jwxpLpwH1ZI6wflSXK586T9X8z18fXyLguyrlqqRRShyst3ceAydsj33ZWpUS1bxUV+LEdQlVx2wiVfwcv2rkNsc9/iM521blHHbFdWD5WvyoGkUjqfm2spJmde4k2Py23jy7f2DnPCtnIjg/RbC1LBvuZiFBl7hR1HAEdpXDVu3xjEOXvYrXlCD7F7MuPoFdd1XakqovOQ1gIjy7wIxio0P6Xlg8CtY9VFDWsOSI6+TCsosjDQ1K4DhbG6IM=) 2026-03-24 01:53:23.987441 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFNVeFZTpABud5ZiuJ8VqUMSYGklgJ3gIrY+LVFVrIQPeX2ndBv7jcXnFD9wZdztcxcrvwaFThxpkxaIi787pLo=) 2026-03-24 01:53:23.987465 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKpq3BgpDoatjxONDtZ4q9hTy9O3SMVg3E2lBXWu+Uqa) 2026-03-24 01:53:28.296630 | orchestrator | 2026-03-24 01:53:28.296725 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:28.296740 | orchestrator | Tuesday 24 March 2026 01:53:23 +0000 (0:00:01.056) 0:00:23.195 ********* 2026-03-24 01:53:28.296753 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIFl5lC5dDkHV1w8nPJaCKeA6pzkaqGZZp+bLvvPi/uxK321lpGBelgUSqdad/7bmCGfhaBnMyxvVPjhG77UiQ8=) 2026-03-24 01:53:28.296768 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnXwOCdtF4YK8nLQoK+PzY6ZpgfWoPyvTI/X0zzRph/z5em6bB+xRFwKvee6Ush/eTnuz4Ed6RuIrcdTX4S9slRHIHgBVV+yJuMpF0RXaAgaapktvr+6TDdkKYE4PXlkR0H0SZHKrzkmFH4tZWJXwNTv8jc83TDIhXAslVkH2bE9YwDmu9BIdsMiIJwyqH2jQCXHlmpgJEgRyOqSD3xWZ8KvLsP30YhfrL3Q5BFhm5DpW/k/3SCNl3tNiIOBukt+N60enb4ohP2zFgWkhnSXlTDCaLIVXFS547mwTTRqQqSFcWwgxHAdC7TTt7stHAVatVUtWkMiF/xSrRvRzPyX+PPtkW7jikJOOEz4szIM87gT/QdUKUxnQM2MohlotEJ2+PRKSWvJoPgiznpD8IooAzQrKhRHjuoclyHWt0y//t0RPgiuYeM9UQgqfoXOmJUMEKnTwr0uuqeCFUymYnmtIrTxjiZqfWpMQ4rmoOg0UY/6MNWvPbwL4IMIkRm2XWge0=) 2026-03-24 01:53:28.296782 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEC1R86RcgEvFQXrIVa+Os5dniQKB7ylducND62Ip6pL) 2026-03-24 01:53:28.296793 | orchestrator | 2026-03-24 01:53:28.296804 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:28.296814 | orchestrator | Tuesday 24 March 2026 01:53:25 +0000 (0:00:01.055) 0:00:24.251 ********* 2026-03-24 01:53:28.296825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCMy4C9bxY484+NsGdm72vJ4gutYuzVT2sVmy8kNDUV5GEK7Unj5RKGqtuUH76aplhwbpINP1MnnU0kdEfF9yHKGHb2WTNj6vyeLenWtiPv/jb6EPtqdplrbb6DYKg5ZcJ1gBwU9L8sTtF2Dst+JsRKT+Ah6aT1dA9+L7nk0n7hoJB4wPmqu/Mw+uozpz+dYVP9RbwzRZlclQQ1WRUnXVW8wQKpyMedh6wh1ZldUdj80YqrjianMTZFiTzF2jrFhg5pNLk8Rvp0tlO/Ek6my9xwERenq5SnVfdru1w4++OVTPgZq9k9P8E9fqv9EuQTZ7UTaxCvfRvMkOuYJtIKGmjDHqQMNqhh/ZaqyhadqpVRDtMflFl3uSTqsa5Yl2eN0o53M6pumuBXJHWbF45GIiGLlkbPiYH3nP6cEHJt+cQl+PaiS3cFF6gR7Apm3UGEDlHpHBkIG6Fsub4tLpGJxebTXlWPxXfM3giHkdUOgw0yZjvmccnBOuWE/mDhpwYpLM0=) 2026-03-24 01:53:28.296836 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4t34mTVLKZZtdT6z4mBbhihMt+tQvlPHKpTfVHiyevKGYsQdZ79s8QADUJTUq+4RWiXXZIhhZib4I0p3tWlEQ=) 2026-03-24 01:53:28.296847 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDtA6mphrNJsbtEZHU/RjlOtdMPbqgrev63Tn+85ikHh) 2026-03-24 01:53:28.296857 | orchestrator | 2026-03-24 01:53:28.296867 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-24 01:53:28.296877 | orchestrator | Tuesday 24 March 2026 01:53:26 +0000 (0:00:01.037) 0:00:25.289 ********* 2026-03-24 01:53:28.296888 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ8RuYvgRiVaGcsvcOBKxn6/JBa20OxbrfG0BfxvzeX3cClC/uXUs2E1L30T6qja/6Ga1MqQVvxBaQIwGsg673g=) 2026-03-24 01:53:28.296917 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCIDlpOPHJt47q9ePDFOQw+ULKRJnWXdYjOmUOWCkoE4ZTWQAAlr8u/vAxSNGvY71yihfDVI6aOWmSknEGUmJGAC1aK+7AYwfjC58fPSySxqqaHgyz9hDYOEeLIaYs9YDjoNppWuUcwJSXg15QutUnD6RORqymGuAHKjx/aFNbIsVRiFds/raqC19RckS2XwpQwUOGHzXk0f5xK+2bk7SWH1NmbD9IWOxGKm1z2sGUVZiiUXOF4ywrCCMx5lVi429/kvgurNjAcY82RRAaiEmj2LDLW0eC+YsT7f3pfkCv6C9xIStfPRh9ygRgTR+iKimeEEgk1cvJdj7i5rXq4xzei4Q8S5/62bhLiGSYPCgpbR/R/IlGgLU76sVnHQy4SIM2qvPxWQ36f047zImhthbeVyByCcfg50G+YGNMvbpZbADl5CCNmhpU3qzWxJsygsxZ2RtPHdLPXtOaKCgQEbMoudfn+7fDKZ1Tla2LdnhcaG+wOIu1KbPMfGkQViH5QyLs=) 2026-03-24 01:53:28.296928 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDfe+Xf3oczD3s4EypVSDUP5F0kiheiy2hpzknYtCT1S) 2026-03-24 01:53:28.296939 | orchestrator | 2026-03-24 01:53:28.296949 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-24 01:53:28.296981 | orchestrator | Tuesday 24 March 2026 01:53:27 +0000 (0:00:01.057) 0:00:26.347 ********* 2026-03-24 01:53:28.296992 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-24 01:53:28.297003 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-24 01:53:28.297013 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-24 01:53:28.297023 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-24 01:53:28.297033 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-24 01:53:28.297043 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-24 01:53:28.297132 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-24 01:53:28.297143 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:53:28.297154 | orchestrator | 2026-03-24 01:53:28.297181 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-24 01:53:28.297194 | orchestrator | Tuesday 24 March 
2026 01:53:27 +0000 (0:00:00.157) 0:00:26.504 ********* 2026-03-24 01:53:28.297205 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:53:28.297217 | orchestrator | 2026-03-24 01:53:28.297228 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-24 01:53:28.297245 | orchestrator | Tuesday 24 March 2026 01:53:27 +0000 (0:00:00.064) 0:00:26.568 ********* 2026-03-24 01:53:28.297257 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:53:28.297268 | orchestrator | 2026-03-24 01:53:28.297280 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-24 01:53:28.297291 | orchestrator | Tuesday 24 March 2026 01:53:27 +0000 (0:00:00.055) 0:00:26.624 ********* 2026-03-24 01:53:28.297303 | orchestrator | changed: [testbed-manager] 2026-03-24 01:53:28.297315 | orchestrator | 2026-03-24 01:53:28.297326 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 01:53:28.297337 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-24 01:53:28.297350 | orchestrator | 2026-03-24 01:53:28.297361 | orchestrator | 2026-03-24 01:53:28.297372 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 01:53:28.297384 | orchestrator | Tuesday 24 March 2026 01:53:28 +0000 (0:00:00.701) 0:00:27.326 ********* 2026-03-24 01:53:28.297395 | orchestrator | =============================================================================== 2026-03-24 01:53:28.297406 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.04s 2026-03-24 01:53:28.297417 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.41s 2026-03-24 01:53:28.297429 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-24 
01:53:28.297440 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-24 01:53:28.297451 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-24 01:53:28.297462 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-24 01:53:28.297473 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-03-24 01:53:28.297484 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-24 01:53:28.297509 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-24 01:53:28.297521 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-03-24 01:53:28.297532 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-24 01:53:28.297542 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-24 01:53:28.297552 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-24 01:53:28.297562 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-24 01:53:28.297580 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-03-24 01:53:28.297591 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-03-24 01:53:28.297601 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.70s 2026-03-24 01:53:28.297611 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-03-24 01:53:28.297621 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with 
ansible_host --- 0.16s 2026-03-24 01:53:28.297632 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-03-24 01:53:28.564294 | orchestrator | + osism apply squid 2026-03-24 01:53:40.565400 | orchestrator | 2026-03-24 01:53:40 | INFO  | Task 59a00fec-c553-43e7-8165-27ad4b73f0be (squid) was prepared for execution. 2026-03-24 01:53:40.565524 | orchestrator | 2026-03-24 01:53:40 | INFO  | It takes a moment until task 59a00fec-c553-43e7-8165-27ad4b73f0be (squid) has been started and output is visible here. 2026-03-24 01:55:46.264747 | orchestrator | 2026-03-24 01:55:46.264871 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-24 01:55:46.264888 | orchestrator | 2026-03-24 01:55:46.264901 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-24 01:55:46.264913 | orchestrator | Tuesday 24 March 2026 01:53:44 +0000 (0:00:00.165) 0:00:00.165 ********* 2026-03-24 01:55:46.264926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-24 01:55:46.264938 | orchestrator | 2026-03-24 01:55:46.264950 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-24 01:55:46.264962 | orchestrator | Tuesday 24 March 2026 01:53:44 +0000 (0:00:00.095) 0:00:00.261 ********* 2026-03-24 01:55:46.264974 | orchestrator | ok: [testbed-manager] 2026-03-24 01:55:46.264994 | orchestrator | 2026-03-24 01:55:46.265012 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-24 01:55:46.265030 | orchestrator | Tuesday 24 March 2026 01:53:45 +0000 (0:00:01.263) 0:00:01.525 ********* 2026-03-24 01:55:46.265049 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-24 01:55:46.265066 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-24 01:55:46.265084 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-24 01:55:46.265102 | orchestrator | 2026-03-24 01:55:46.265122 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-24 01:55:46.265209 | orchestrator | Tuesday 24 March 2026 01:53:46 +0000 (0:00:01.096) 0:00:02.621 ********* 2026-03-24 01:55:46.265225 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-24 01:55:46.265237 | orchestrator | 2026-03-24 01:55:46.265249 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-24 01:55:46.265260 | orchestrator | Tuesday 24 March 2026 01:53:47 +0000 (0:00:01.047) 0:00:03.669 ********* 2026-03-24 01:55:46.265272 | orchestrator | ok: [testbed-manager] 2026-03-24 01:55:46.265285 | orchestrator | 2026-03-24 01:55:46.265298 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-24 01:55:46.265311 | orchestrator | Tuesday 24 March 2026 01:53:48 +0000 (0:00:00.346) 0:00:04.015 ********* 2026-03-24 01:55:46.265325 | orchestrator | changed: [testbed-manager] 2026-03-24 01:55:46.265339 | orchestrator | 2026-03-24 01:55:46.265351 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-24 01:55:46.265364 | orchestrator | Tuesday 24 March 2026 01:53:49 +0000 (0:00:00.855) 0:00:04.870 ********* 2026-03-24 01:55:46.265377 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-24 01:55:46.265395 | orchestrator | ok: [testbed-manager]
2026-03-24 01:55:46.265408 | orchestrator |
2026-03-24 01:55:46.265421 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-03-24 01:55:46.265459 | orchestrator | Tuesday 24 March 2026 01:54:29 +0000 (0:00:40.165) 0:00:45.036 *********
2026-03-24 01:55:46.265473 | orchestrator | changed: [testbed-manager]
2026-03-24 01:55:46.265485 | orchestrator |
2026-03-24 01:55:46.265498 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-03-24 01:55:46.265511 | orchestrator | Tuesday 24 March 2026 01:54:45 +0000 (0:00:15.843) 0:01:00.879 *********
2026-03-24 01:55:46.265524 | orchestrator | Pausing for 60 seconds
2026-03-24 01:55:46.265537 | orchestrator | changed: [testbed-manager]
2026-03-24 01:55:46.265551 | orchestrator |
2026-03-24 01:55:46.265563 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-03-24 01:55:46.265576 | orchestrator | Tuesday 24 March 2026 01:55:45 +0000 (0:01:00.078) 0:02:00.958 *********
2026-03-24 01:55:46.265588 | orchestrator | ok: [testbed-manager]
2026-03-24 01:55:46.265601 | orchestrator |
2026-03-24 01:55:46.265614 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-03-24 01:55:46.265627 | orchestrator | Tuesday 24 March 2026 01:55:45 +0000 (0:00:00.064) 0:02:01.022 *********
2026-03-24 01:55:46.265647 | orchestrator | changed: [testbed-manager]
2026-03-24 01:55:46.265680 | orchestrator |
2026-03-24 01:55:46.265698 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 01:55:46.265716 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 01:55:46.265734 | orchestrator |
2026-03-24 01:55:46.265753 | orchestrator |
2026-03-24 01:55:46.265769 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 01:55:46.265786 | orchestrator | Tuesday 24 March 2026 01:55:45 +0000 (0:00:00.644) 0:02:01.666 *********
2026-03-24 01:55:46.265804 | orchestrator | ===============================================================================
2026-03-24 01:55:46.265820 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-03-24 01:55:46.265837 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 40.17s
2026-03-24 01:55:46.265855 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.84s
2026-03-24 01:55:46.265895 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.26s
2026-03-24 01:55:46.265915 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.10s
2026-03-24 01:55:46.265930 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s
2026-03-24 01:55:46.265947 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.86s
2026-03-24 01:55:46.265965 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s
2026-03-24 01:55:46.265983 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2026-03-24 01:55:46.266001 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2026-03-24 01:55:46.266105 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-03-24 01:55:46.557964 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-03-24 01:55:46.558124 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-03-24 01:55:46.605501 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-24 01:55:46.605600 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-03-24 01:55:46.609762 | orchestrator | + set -e
2026-03-24 01:55:46.609821 | orchestrator | + NAMESPACE=kolla/release
2026-03-24 01:55:46.609837 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-24 01:55:46.615390 | orchestrator | ++ semver 9.5.0 9.0.0
2026-03-24 01:55:46.681812 | orchestrator | + [[ 1 -lt 0 ]]
2026-03-24 01:55:46.681910 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-03-24 01:55:58.814780 | orchestrator | 2026-03-24 01:55:58 | INFO  | Task c4bd7888-a379-441f-902f-afff0e714689 (operator) was prepared for execution.
2026-03-24 01:55:58.814891 | orchestrator | 2026-03-24 01:55:58 | INFO  | It takes a moment until task c4bd7888-a379-441f-902f-afff0e714689 (operator) has been started and output is visible here.
2026-03-24 01:56:15.070134 | orchestrator |
2026-03-24 01:56:15.070308 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-03-24 01:56:15.070335 | orchestrator |
2026-03-24 01:56:15.070354 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-24 01:56:15.070373 | orchestrator | Tuesday 24 March 2026 01:56:02 +0000 (0:00:00.135) 0:00:00.135 *********
2026-03-24 01:56:15.070392 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:56:15.070411 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:56:15.070429 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:56:15.070448 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:56:15.070467 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:56:15.070486 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:56:15.070505 | orchestrator |
2026-03-24 01:56:15.070525 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-03-24 01:56:15.070546 | orchestrator | Tuesday 24 March 2026 01:56:06 +0000 (0:00:03.475) 0:00:03.610 *********
2026-03-24 01:56:15.070565 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:56:15.070585 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:56:15.070597 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:56:15.070627 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:56:15.070648 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:56:15.070667 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:56:15.070687 | orchestrator |
2026-03-24 01:56:15.070707 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-03-24 01:56:15.070727 | orchestrator |
2026-03-24 01:56:15.070746 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-03-24 01:56:15.070764 | orchestrator | Tuesday 24 March 2026 01:56:07 +0000 (0:00:00.915) 0:00:04.526 *********
2026-03-24 01:56:15.070782 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:56:15.070800 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:56:15.070819 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:56:15.070838 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:56:15.070856 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:56:15.070877 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:56:15.070897 | orchestrator |
2026-03-24 01:56:15.070917 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-03-24 01:56:15.070932 | orchestrator | Tuesday 24 March 2026 01:56:07 +0000 (0:00:00.206) 0:00:04.733 *********
2026-03-24 01:56:15.070944 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:56:15.070955 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:56:15.070966 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:56:15.070978 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:56:15.070989 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:56:15.071000 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:56:15.071012 | orchestrator |
2026-03-24 01:56:15.071023 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-24 01:56:15.071034 | orchestrator | Tuesday 24 March 2026 01:56:07 +0000 (0:00:00.170) 0:00:04.904 *********
2026-03-24 01:56:15.071046 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:56:15.071058 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:56:15.071070 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:56:15.071081 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:56:15.071092 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:56:15.071103 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:56:15.071115 | orchestrator |
2026-03-24 01:56:15.071126 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-24 01:56:15.071137 | orchestrator | Tuesday 24 March 2026 01:56:08 +0000 (0:00:00.785) 0:00:05.514 *********
2026-03-24 01:56:15.071196 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:56:15.071209 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:56:15.071221 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:56:15.071232 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:56:15.071243 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:56:15.071254 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:56:15.071292 | orchestrator |
2026-03-24 01:56:15.071304 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-24 01:56:15.071315 | orchestrator | Tuesday 24 March 2026 01:56:08 +0000 (0:00:01.282) 0:00:06.299 *********
2026-03-24 01:56:15.071327 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-24 01:56:15.071338 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-24 01:56:15.071349 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-24 01:56:15.071360 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-24 01:56:15.071372 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-24 01:56:15.071383 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-24 01:56:15.071394 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-24 01:56:15.071405 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-24 01:56:15.071416 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-24 01:56:15.071427 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-24 01:56:15.071438 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-24 01:56:15.071449 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-24 01:56:15.071460 | orchestrator |
2026-03-24 01:56:15.071471 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-24 01:56:15.071482 | orchestrator | Tuesday 24 March 2026 01:56:10 +0000 (0:00:01.287) 0:00:07.581 *********
2026-03-24 01:56:15.071493 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:56:15.071505 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:56:15.071516 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:56:15.071527 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:56:15.071538 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:56:15.071553 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:56:15.071572 | orchestrator |
2026-03-24 01:56:15.071592 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-24 01:56:15.071611 | orchestrator | Tuesday 24 March 2026 01:56:11 +0000 (0:00:01.245) 0:00:08.869 *********
2026-03-24 01:56:15.071629 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-24 01:56:15.071648 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-24 01:56:15.071665 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-24 01:56:15.071682 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-24 01:56:15.071725 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-24 01:56:15.071745 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-24 01:56:15.071766 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-24 01:56:15.071784 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-24 01:56:15.071803 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-24 01:56:15.071822 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-24 01:56:15.071840 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-24 01:56:15.071859 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-24 01:56:15.071874 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-24 01:56:15.071891 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-24 01:56:15.071908 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-24 01:56:15.071925 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-24 01:56:15.071941 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-24 01:56:15.071959 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-24 01:56:15.071978 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-24 01:56:15.071996 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-24 01:56:15.072032 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-24 01:56:15.072051 | orchestrator |
2026-03-24 01:56:15.072070 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-24 01:56:15.072091 | orchestrator | Tuesday 24 March 2026 01:56:12 +0000 (0:00:01.245) 0:00:10.114 *********
2026-03-24 01:56:15.072110 | orchestrator | skipping: [testbed-node-0]
2026-03-24 01:56:15.072130 | orchestrator | skipping: [testbed-node-1]
2026-03-24 01:56:15.072195 | orchestrator | skipping: [testbed-node-2]
2026-03-24 01:56:15.072209 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:56:15.072220 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:56:15.072232 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:56:15.072243 | orchestrator |
2026-03-24 01:56:15.072254 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-24 01:56:15.072265 | orchestrator | Tuesday 24 March 2026 01:56:12 +0000 (0:00:00.156) 0:00:10.270 *********
2026-03-24 01:56:15.072277 | orchestrator | skipping: [testbed-node-0]
2026-03-24 01:56:15.072288 | orchestrator | skipping: [testbed-node-1]
2026-03-24 01:56:15.072314 | orchestrator | skipping: [testbed-node-2]
2026-03-24 01:56:15.072325 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:56:15.072346 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:56:15.072367 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:56:15.072388 | orchestrator |
2026-03-24 01:56:15.072480 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-24 01:56:15.072503 | orchestrator | Tuesday 24 March 2026 01:56:13 +0000 (0:00:00.173) 0:00:10.444 *********
2026-03-24 01:56:15.072522 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:56:15.072541 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:56:15.072560 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:56:15.072580 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:56:15.072601 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:56:15.072620 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:56:15.072638 | orchestrator |
2026-03-24 01:56:15.072657 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-24 01:56:15.072678 | orchestrator | Tuesday 24 March 2026 01:56:13 +0000 (0:00:00.683) 0:00:11.128 *********
2026-03-24 01:56:15.072697 | orchestrator | skipping: [testbed-node-0]
2026-03-24 01:56:15.072714 | orchestrator | skipping: [testbed-node-1]
2026-03-24 01:56:15.072732 | orchestrator | skipping: [testbed-node-2]
2026-03-24 01:56:15.072750 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:56:15.072768 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:56:15.072786 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:56:15.072805 | orchestrator |
2026-03-24 01:56:15.072824 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-24 01:56:15.072844 | orchestrator | Tuesday 24 March 2026 01:56:14 +0000 (0:00:00.191) 0:00:11.319 *********
2026-03-24 01:56:15.072865 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-24 01:56:15.072903 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-24 01:56:15.072925 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-24 01:56:15.072946 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:56:15.072966 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:56:15.072984 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:56:15.072995 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-24 01:56:15.073007 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:56:15.073018 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-24 01:56:15.073029 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:56:15.073041 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-24 01:56:15.073052 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:56:15.073063 | orchestrator |
2026-03-24 01:56:15.073074 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-24 01:56:15.073085 | orchestrator | Tuesday 24 March 2026 01:56:14 +0000 (0:00:00.771) 0:00:12.091 *********
2026-03-24 01:56:15.073109 | orchestrator | skipping: [testbed-node-0]
2026-03-24 01:56:15.073120 | orchestrator | skipping: [testbed-node-1]
2026-03-24 01:56:15.073131 | orchestrator | skipping: [testbed-node-2]
2026-03-24 01:56:15.073142 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:56:15.073268 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:56:15.073281 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:56:15.073293 | orchestrator |
2026-03-24 01:56:15.073304 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-24 01:56:15.073315 | orchestrator | Tuesday 24 March 2026 01:56:14 +0000 (0:00:00.147) 0:00:12.239 *********
2026-03-24 01:56:15.073327 | orchestrator | skipping: [testbed-node-0]
2026-03-24 01:56:15.073338 | orchestrator | skipping: [testbed-node-1]
2026-03-24 01:56:15.073349 | orchestrator | skipping: [testbed-node-2]
2026-03-24 01:56:15.073360 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:56:15.073390 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:56:16.512205 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:56:16.512289 | orchestrator |
2026-03-24 01:56:16.512302 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-24 01:56:16.512311 | orchestrator | Tuesday 24 March 2026 01:56:15 +0000 (0:00:00.135) 0:00:12.374 *********
2026-03-24 01:56:16.512320 | orchestrator | skipping: [testbed-node-0]
2026-03-24 01:56:16.512328 | orchestrator | skipping: [testbed-node-1]
2026-03-24 01:56:16.512337 | orchestrator | skipping: [testbed-node-2]
2026-03-24 01:56:16.512345 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:56:16.512353 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:56:16.512361 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:56:16.512369 | orchestrator |
2026-03-24 01:56:16.512378 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-24 01:56:16.512391 | orchestrator | Tuesday 24 March 2026 01:56:15 +0000 (0:00:00.141) 0:00:12.516 *********
2026-03-24 01:56:16.512404 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:56:16.512417 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:56:16.512440 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:56:16.512453 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:56:16.512464 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:56:16.512476 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:56:16.512487 | orchestrator |
2026-03-24 01:56:16.512500 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-24 01:56:16.512512 | orchestrator | Tuesday 24 March 2026 01:56:15 +0000 (0:00:00.779) 0:00:13.296 *********
2026-03-24 01:56:16.512524 | orchestrator | skipping: [testbed-node-0]
2026-03-24 01:56:16.512538 | orchestrator | skipping: [testbed-node-1]
2026-03-24 01:56:16.512552 | orchestrator | skipping: [testbed-node-2]
2026-03-24 01:56:16.512565 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:56:16.512579 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:56:16.512591 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:56:16.512604 | orchestrator |
2026-03-24 01:56:16.512617 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 01:56:16.512632 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-24 01:56:16.512646 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-24 01:56:16.512659 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-24 01:56:16.512673 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-24 01:56:16.512686 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-24 01:56:16.512722 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-24 01:56:16.512734 | orchestrator |
2026-03-24 01:56:16.512747 | orchestrator |
2026-03-24 01:56:16.512760 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 01:56:16.512772 | orchestrator | Tuesday 24 March 2026 01:56:16 +0000 (0:00:00.257) 0:00:13.554 *********
2026-03-24 01:56:16.512784 | orchestrator | ===============================================================================
2026-03-24 01:56:16.512796 | orchestrator | Gathering Facts --------------------------------------------------------- 3.48s
2026-03-24 01:56:16.512809 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.29s
2026-03-24 01:56:16.512822 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.28s
2026-03-24 01:56:16.512834 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s
2026-03-24 01:56:16.512849 | orchestrator | Do not require tty for all users ---------------------------------------- 0.92s
2026-03-24 01:56:16.512862 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2026-03-24 01:56:16.512875 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.78s
2026-03-24 01:56:16.512888 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.77s
2026-03-24 01:56:16.512902 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.68s
2026-03-24 01:56:16.512915 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2026-03-24 01:56:16.512928 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.26s
2026-03-24 01:56:16.512940 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.21s
2026-03-24 01:56:16.512954 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2026-03-24 01:56:16.512967 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s
2026-03-24 01:56:16.512980 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s
2026-03-24 01:56:16.512992 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2026-03-24 01:56:16.513006 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-03-24 01:56:16.513019 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-03-24 01:56:16.513034 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2026-03-24 01:56:16.789873 | orchestrator | + osism apply --environment custom facts
2026-03-24 01:56:18.672529 | orchestrator | 2026-03-24 01:56:18 | INFO  | Trying to run play facts in environment custom
2026-03-24 01:56:28.790601 | orchestrator | 2026-03-24 01:56:28 | INFO  | Task 2a3f50fb-ffb3-46fb-ad2d-e1c3c86af3a2 (facts) was prepared for execution.
2026-03-24 01:56:28.790716 | orchestrator | 2026-03-24 01:56:28 | INFO  | It takes a moment until task 2a3f50fb-ffb3-46fb-ad2d-e1c3c86af3a2 (facts) has been started and output is visible here.
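The shell trace earlier in this section gates the Kolla namespace switch on a `semver` three-way comparison (`semver 9.5.0 10.0.0-0` printed `-1`, so the job ran `set-kolla-namespace.sh kolla/release`). The helper's implementation is not shown in the log; the sketch below is an assumption that reproduces the observed -1/0/1 contract with `sort -V`, purely to illustrate the gate — the real `/opt/configuration/scripts/` helpers may differ.

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the semver helper seen in the trace:
# prints -1, 0, or 1 for "$1 <, ==, > $2", using GNU sort's version sort.
semver() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$lower" = "$1" ]; then echo -1; else echo 1; fi
}

MANAGER_VERSION="9.5.0"   # value observed in the trace above
if [ "$MANAGER_VERSION" != "latest" ] && [ "$(semver "$MANAGER_VERSION" 10.0.0-0)" -lt 0 ]; then
  # Below 10.0.0-0 the job pins the namespace to kolla/release; in the
  # real script this is a sed rewrite of docker_namespace in kolla.yml.
  echo "docker_namespace: kolla/release"
fi
```

With the logged inputs this takes the same branch as the trace: the comparison yields -1, so the namespace line is rewritten.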
2026-03-24 01:57:15.241652 | orchestrator |
2026-03-24 01:57:15.241784 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-24 01:57:15.241809 | orchestrator |
2026-03-24 01:57:15.241827 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-24 01:57:15.241846 | orchestrator | Tuesday 24 March 2026 01:56:32 +0000 (0:00:00.081) 0:00:00.081 *********
2026-03-24 01:57:15.241857 | orchestrator | ok: [testbed-manager]
2026-03-24 01:57:15.241870 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:57:15.241882 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:57:15.241893 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:57:15.241905 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:57:15.241915 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:57:15.241955 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:57:15.241966 | orchestrator |
2026-03-24 01:57:15.241977 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-24 01:57:15.241988 | orchestrator | Tuesday 24 March 2026 01:56:34 +0000 (0:00:01.405) 0:00:01.486 *********
2026-03-24 01:57:15.241998 | orchestrator | ok: [testbed-manager]
2026-03-24 01:57:15.242009 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:57:15.242103 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:57:15.242120 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:57:15.242135 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:57:15.242151 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:57:15.242167 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:57:15.242201 | orchestrator |
2026-03-24 01:57:15.242217 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-24 01:57:15.242233 | orchestrator |
2026-03-24 01:57:15.242248 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-24 01:57:15.242263 | orchestrator | Tuesday 24 March 2026 01:56:35 +0000 (0:00:01.221) 0:00:02.707 *********
2026-03-24 01:57:15.242276 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:57:15.242296 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:57:15.242314 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:57:15.242326 | orchestrator |
2026-03-24 01:57:15.242337 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-24 01:57:15.242351 | orchestrator | Tuesday 24 March 2026 01:56:35 +0000 (0:00:00.112) 0:00:02.820 *********
2026-03-24 01:57:15.242371 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:57:15.242390 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:57:15.242402 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:57:15.242415 | orchestrator |
2026-03-24 01:57:15.242434 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-24 01:57:15.242452 | orchestrator | Tuesday 24 March 2026 01:56:35 +0000 (0:00:00.220) 0:00:03.015 *********
2026-03-24 01:57:15.242464 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:57:15.242483 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:57:15.242500 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:57:15.242512 | orchestrator |
2026-03-24 01:57:15.242531 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-24 01:57:15.242548 | orchestrator | Tuesday 24 March 2026 01:56:35 +0000 (0:00:00.129) 0:00:03.235 *********
2026-03-24 01:57:15.242563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 01:57:15.242580 | orchestrator |
2026-03-24 01:57:15.242594 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-24 01:57:15.242606 | orchestrator | Tuesday 24 March 2026 01:56:35 +0000 (0:00:00.427) 0:00:03.365 *********
2026-03-24 01:57:15.242620 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:57:15.242635 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:57:15.242648 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:57:15.242660 | orchestrator |
2026-03-24 01:57:15.242675 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-24 01:57:15.242689 | orchestrator | Tuesday 24 March 2026 01:56:36 +0000 (0:00:00.125) 0:00:03.792 *********
2026-03-24 01:57:15.242702 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:57:15.242717 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:57:15.242733 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:57:15.242745 | orchestrator |
2026-03-24 01:57:15.242758 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-24 01:57:15.242774 | orchestrator | Tuesday 24 March 2026 01:56:36 +0000 (0:00:00.125) 0:00:03.918 *********
2026-03-24 01:57:15.242789 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:57:15.242801 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:57:15.242816 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:57:15.242832 | orchestrator |
2026-03-24 01:57:15.242846 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-24 01:57:15.242883 | orchestrator | Tuesday 24 March 2026 01:56:37 +0000 (0:00:01.038) 0:00:04.957 *********
2026-03-24 01:57:15.242896 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:57:15.242910 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:57:15.242925 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:57:15.242939 | orchestrator |
2026-03-24 01:57:15.242952 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-24 01:57:15.242967 | orchestrator | Tuesday 24 March 2026 01:56:38 +0000 (0:00:00.443) 0:00:05.401 *********
2026-03-24 01:57:15.242981 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:57:15.242994 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:57:15.243006 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:57:15.243020 | orchestrator |
2026-03-24 01:57:15.243032 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-24 01:57:15.243098 | orchestrator | Tuesday 24 March 2026 01:56:39 +0000 (0:00:01.083) 0:00:06.484 *********
2026-03-24 01:57:15.243112 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:57:15.243122 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:57:15.243132 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:57:15.243141 | orchestrator |
2026-03-24 01:57:15.243151 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-24 01:57:15.243161 | orchestrator | Tuesday 24 March 2026 01:56:56 +0000 (0:00:17.340) 0:00:23.824 *********
2026-03-24 01:57:15.243171 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:57:15.243250 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:57:15.243263 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:57:15.243274 | orchestrator |
2026-03-24 01:57:15.243286 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-24 01:57:15.243317 | orchestrator | Tuesday 24 March 2026 01:56:56 +0000 (0:00:00.088) 0:00:23.913 *********
2026-03-24 01:57:15.243325 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:57:15.243331 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:57:15.243338 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:57:15.243345 | orchestrator |
2026-03-24 01:57:15.243356 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-24 01:57:15.243363 | orchestrator | Tuesday 24 March 2026 01:57:05 +0000 (0:00:09.328) 0:00:33.242 *********
2026-03-24 01:57:15.243369 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:57:15.243376 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:57:15.243383 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:57:15.243389 | orchestrator |
2026-03-24 01:57:15.243395 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-24 01:57:15.243402 | orchestrator | Tuesday 24 March 2026 01:57:06 +0000 (0:00:00.470) 0:00:33.713 *********
2026-03-24 01:57:15.243409 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-24 01:57:15.243416 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-24 01:57:15.243422 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-24 01:57:15.243429 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-24 01:57:15.243435 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-24 01:57:15.243442 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-24 01:57:15.243448 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-24 01:57:15.243455 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-24 01:57:15.243461 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-24 01:57:15.243468 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-24 01:57:15.243474 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-24 01:57:15.243481 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-24 01:57:15.243487 | orchestrator |
2026-03-24 01:57:15.243493 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-24 01:57:15.243509 | orchestrator | Tuesday 24 March 2026 01:57:09 +0000 (0:00:03.629) 0:00:37.343 *********
2026-03-24 01:57:15.243516 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:57:15.243522 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:57:15.243528 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:57:15.243535 | orchestrator |
2026-03-24 01:57:15.243541 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-24 01:57:15.243548 | orchestrator |
2026-03-24 01:57:15.243554 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-24 01:57:15.243561 | orchestrator | Tuesday 24 March 2026 01:57:11 +0000 (0:00:01.517) 0:00:38.860 *********
2026-03-24 01:57:15.243567 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:57:15.243574 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:57:15.243580 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:57:15.243587 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:57:15.243593 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:57:15.243600 | orchestrator | ok: [testbed-manager]
2026-03-24 01:57:15.243606 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:57:15.243613 | orchestrator |
2026-03-24 01:57:15.243619 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 01:57:15.243626 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 01:57:15.243633 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 01:57:15.243640 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 01:57:15.243647 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 01:57:15.243653 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 01:57:15.243660 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 01:57:15.243667 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 01:57:15.243673 | orchestrator |
2026-03-24 01:57:15.243680 | orchestrator |
2026-03-24 01:57:15.243686 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 01:57:15.243693 | orchestrator | Tuesday 24 March 2026 01:57:15 +0000 (0:00:03.745) 0:00:42.606 *********
2026-03-24 01:57:15.243699 | orchestrator | ===============================================================================
2026-03-24 01:57:15.243706 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.34s
2026-03-24 01:57:15.243712 | orchestrator | Install required packages (Debian) -------------------------------------- 9.33s
2026-03-24 01:57:15.243719 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.75s
2026-03-24 01:57:15.243725 | orchestrator | Copy fact files --------------------------------------------------------- 3.63s
2026-03-24 01:57:15.243732 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.52s
2026-03-24 01:57:15.243738 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-03-24 01:57:15.243749 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s
2026-03-24 01:57:15.458367 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-03-24 01:57:15.458464 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2026-03-24 01:57:15.458496 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-03-24 01:57:15.458529 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-24 01:57:15.458540 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-03-24 01:57:15.458551 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s
2026-03-24 01:57:15.458561 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2026-03-24 01:57:15.458571 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2026-03-24 01:57:15.458582 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2026-03-24 01:57:15.458593 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2026-03-24 01:57:15.458604 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-03-24 01:57:15.736733 | orchestrator | + osism apply bootstrap
2026-03-24 01:57:27.669160 | orchestrator | 2026-03-24 01:57:27 | INFO  | Task b94c449c-71fc-4fe6-b3b8-266bd71aafd5 (bootstrap) was prepared for execution.
2026-03-24 01:57:27.669287 | orchestrator | 2026-03-24 01:57:27 | INFO  | It takes a moment until task b94c449c-71fc-4fe6-b3b8-266bd71aafd5 (bootstrap) has been started and output is visible here.
2026-03-24 01:57:43.273941 | orchestrator | 2026-03-24 01:57:43.274157 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-03-24 01:57:43.274185 | orchestrator | 2026-03-24 01:57:43.274280 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-03-24 01:57:43.274303 | orchestrator | Tuesday 24 March 2026 01:57:31 +0000 (0:00:00.113) 0:00:00.113 ********* 2026-03-24 01:57:43.274323 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:43.274344 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:43.274363 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:43.274383 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:43.274402 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:43.274421 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:43.274435 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:43.274447 | orchestrator | 2026-03-24 01:57:43.274461 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-24 01:57:43.274475 | orchestrator | 2026-03-24 01:57:43.274488 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-24 01:57:43.274502 | orchestrator | Tuesday 24 March 2026 01:57:31 +0000 (0:00:00.172) 0:00:00.285 ********* 2026-03-24 01:57:43.274515 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:43.274528 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:43.274541 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:43.274555 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:43.274568 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:43.274581 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:43.274594 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:43.274607 | orchestrator | 2026-03-24 01:57:43.274620 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-03-24 01:57:43.274633 | orchestrator | 2026-03-24 01:57:43.274646 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-24 01:57:43.274660 | orchestrator | Tuesday 24 March 2026 01:57:35 +0000 (0:00:03.566) 0:00:03.852 ********* 2026-03-24 01:57:43.274674 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-24 01:57:43.274687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-03-24 01:57:43.274700 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-24 01:57:43.274713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 01:57:43.274726 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-24 01:57:43.274739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 01:57:43.274752 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-24 01:57:43.274765 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-24 01:57:43.274785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 01:57:43.274842 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-24 01:57:43.274862 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-03-24 01:57:43.274882 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-03-24 01:57:43.274902 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-24 01:57:43.274921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-24 01:57:43.274937 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-24 01:57:43.274955 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:57:43.274975 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-24 01:57:43.274992 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-1)  2026-03-24 01:57:43.275011 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 01:57:43.275031 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-24 01:57:43.275051 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-03-24 01:57:43.275070 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 01:57:43.275083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 01:57:43.275095 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-03-24 01:57:43.275106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-24 01:57:43.275117 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-24 01:57:43.275128 | orchestrator | skipping: [testbed-node-3] 2026-03-24 01:57:43.275139 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 01:57:43.275150 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 01:57:43.275161 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-24 01:57:43.275172 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 01:57:43.275183 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-24 01:57:43.275233 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-24 01:57:43.275254 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 01:57:43.275274 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-24 01:57:43.275292 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 01:57:43.275311 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-24 01:57:43.275323 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-24 01:57:43.275334 | orchestrator | skipping: 
[testbed-node-5] => (item=testbed-node-5)  2026-03-24 01:57:43.275345 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-24 01:57:43.275356 | orchestrator | skipping: [testbed-node-0] 2026-03-24 01:57:43.275368 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-24 01:57:43.275379 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-24 01:57:43.275390 | orchestrator | skipping: [testbed-node-4] 2026-03-24 01:57:43.275402 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-24 01:57:43.275413 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-24 01:57:43.275446 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-24 01:57:43.275458 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-24 01:57:43.275469 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-24 01:57:43.275481 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-24 01:57:43.275492 | orchestrator | skipping: [testbed-node-1] 2026-03-24 01:57:43.275504 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-24 01:57:43.275515 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-24 01:57:43.275526 | orchestrator | skipping: [testbed-node-5] 2026-03-24 01:57:43.275549 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-24 01:57:43.275579 | orchestrator | skipping: [testbed-node-2] 2026-03-24 01:57:43.275591 | orchestrator | 2026-03-24 01:57:43.275603 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-24 01:57:43.275614 | orchestrator | 2026-03-24 01:57:43.275626 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-24 01:57:43.275637 | orchestrator | Tuesday 24 March 2026 01:57:35 +0000 (0:00:00.381) 
0:00:04.233 ********* 2026-03-24 01:57:43.275649 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:43.275660 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:43.275671 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:43.275683 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:43.275694 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:43.275706 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:43.275717 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:43.275728 | orchestrator | 2026-03-24 01:57:43.275740 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-24 01:57:43.275751 | orchestrator | Tuesday 24 March 2026 01:57:37 +0000 (0:00:01.264) 0:00:05.497 ********* 2026-03-24 01:57:43.275763 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:43.275774 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:43.275878 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:43.275891 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:43.275902 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:43.275914 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:43.275925 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:43.275937 | orchestrator | 2026-03-24 01:57:43.275949 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-24 01:57:43.275960 | orchestrator | Tuesday 24 March 2026 01:57:38 +0000 (0:00:01.303) 0:00:06.801 ********* 2026-03-24 01:57:43.275973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 01:57:43.275987 | orchestrator | 2026-03-24 01:57:43.275999 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-24 01:57:43.276010 | orchestrator | Tuesday 
24 March 2026 01:57:38 +0000 (0:00:00.224) 0:00:07.025 ********* 2026-03-24 01:57:43.276022 | orchestrator | changed: [testbed-manager] 2026-03-24 01:57:43.276033 | orchestrator | changed: [testbed-node-1] 2026-03-24 01:57:43.276049 | orchestrator | changed: [testbed-node-4] 2026-03-24 01:57:43.276068 | orchestrator | changed: [testbed-node-3] 2026-03-24 01:57:43.276088 | orchestrator | changed: [testbed-node-5] 2026-03-24 01:57:43.276107 | orchestrator | changed: [testbed-node-0] 2026-03-24 01:57:43.276126 | orchestrator | changed: [testbed-node-2] 2026-03-24 01:57:43.276144 | orchestrator | 2026-03-24 01:57:43.276163 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-24 01:57:43.276180 | orchestrator | Tuesday 24 March 2026 01:57:40 +0000 (0:00:02.066) 0:00:09.092 ********* 2026-03-24 01:57:43.276260 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:57:43.276282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 01:57:43.276303 | orchestrator | 2026-03-24 01:57:43.276322 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-24 01:57:43.276343 | orchestrator | Tuesday 24 March 2026 01:57:41 +0000 (0:00:00.261) 0:00:09.353 ********* 2026-03-24 01:57:43.276363 | orchestrator | changed: [testbed-node-4] 2026-03-24 01:57:43.276383 | orchestrator | changed: [testbed-node-5] 2026-03-24 01:57:43.276403 | orchestrator | changed: [testbed-node-3] 2026-03-24 01:57:43.276422 | orchestrator | changed: [testbed-node-1] 2026-03-24 01:57:43.276442 | orchestrator | changed: [testbed-node-2] 2026-03-24 01:57:43.276462 | orchestrator | changed: [testbed-node-0] 2026-03-24 01:57:43.276497 | orchestrator | 2026-03-24 01:57:43.276517 | orchestrator | TASK [osism.commons.proxy : Set 
system wide settings in environment file] ****** 2026-03-24 01:57:43.276529 | orchestrator | Tuesday 24 March 2026 01:57:42 +0000 (0:00:01.087) 0:00:10.441 ********* 2026-03-24 01:57:43.276540 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:57:43.276552 | orchestrator | changed: [testbed-node-2] 2026-03-24 01:57:43.276563 | orchestrator | changed: [testbed-node-3] 2026-03-24 01:57:43.276574 | orchestrator | changed: [testbed-node-5] 2026-03-24 01:57:43.276585 | orchestrator | changed: [testbed-node-4] 2026-03-24 01:57:43.276596 | orchestrator | changed: [testbed-node-0] 2026-03-24 01:57:43.276607 | orchestrator | changed: [testbed-node-1] 2026-03-24 01:57:43.276618 | orchestrator | 2026-03-24 01:57:43.276630 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-24 01:57:43.276641 | orchestrator | Tuesday 24 March 2026 01:57:42 +0000 (0:00:00.621) 0:00:11.062 ********* 2026-03-24 01:57:43.276653 | orchestrator | skipping: [testbed-node-3] 2026-03-24 01:57:43.276664 | orchestrator | skipping: [testbed-node-4] 2026-03-24 01:57:43.276675 | orchestrator | skipping: [testbed-node-5] 2026-03-24 01:57:43.276686 | orchestrator | skipping: [testbed-node-0] 2026-03-24 01:57:43.276698 | orchestrator | skipping: [testbed-node-1] 2026-03-24 01:57:43.276709 | orchestrator | skipping: [testbed-node-2] 2026-03-24 01:57:43.276720 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:43.276731 | orchestrator | 2026-03-24 01:57:43.276743 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-24 01:57:43.276755 | orchestrator | Tuesday 24 March 2026 01:57:43 +0000 (0:00:00.402) 0:00:11.465 ********* 2026-03-24 01:57:43.276767 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:57:43.276778 | orchestrator | skipping: [testbed-node-3] 2026-03-24 01:57:43.276803 | orchestrator | skipping: [testbed-node-4] 2026-03-24 01:57:55.356173 | orchestrator | skipping: 
[testbed-node-5] 2026-03-24 01:57:55.357280 | orchestrator | skipping: [testbed-node-0] 2026-03-24 01:57:55.357338 | orchestrator | skipping: [testbed-node-1] 2026-03-24 01:57:55.357348 | orchestrator | skipping: [testbed-node-2] 2026-03-24 01:57:55.357357 | orchestrator | 2026-03-24 01:57:55.357367 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-24 01:57:55.357376 | orchestrator | Tuesday 24 March 2026 01:57:43 +0000 (0:00:00.205) 0:00:11.670 ********* 2026-03-24 01:57:55.357387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 01:57:55.357409 | orchestrator | 2026-03-24 01:57:55.357417 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-24 01:57:55.357427 | orchestrator | Tuesday 24 March 2026 01:57:43 +0000 (0:00:00.276) 0:00:11.947 ********* 2026-03-24 01:57:55.357436 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 01:57:55.357444 | orchestrator | 2026-03-24 01:57:55.357452 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-24 01:57:55.357461 | orchestrator | Tuesday 24 March 2026 01:57:43 +0000 (0:00:00.282) 0:00:12.230 ********* 2026-03-24 01:57:55.357469 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:55.357478 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:55.357486 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.357494 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:55.357503 | orchestrator | ok: [testbed-node-0] 2026-03-24 
01:57:55.357511 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:55.357518 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:55.357526 | orchestrator | 2026-03-24 01:57:55.357534 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-24 01:57:55.357542 | orchestrator | Tuesday 24 March 2026 01:57:45 +0000 (0:00:01.513) 0:00:13.743 ********* 2026-03-24 01:57:55.357574 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:57:55.357586 | orchestrator | skipping: [testbed-node-3] 2026-03-24 01:57:55.357597 | orchestrator | skipping: [testbed-node-4] 2026-03-24 01:57:55.357608 | orchestrator | skipping: [testbed-node-5] 2026-03-24 01:57:55.357620 | orchestrator | skipping: [testbed-node-0] 2026-03-24 01:57:55.357631 | orchestrator | skipping: [testbed-node-1] 2026-03-24 01:57:55.357642 | orchestrator | skipping: [testbed-node-2] 2026-03-24 01:57:55.357653 | orchestrator | 2026-03-24 01:57:55.357665 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-24 01:57:55.357677 | orchestrator | Tuesday 24 March 2026 01:57:45 +0000 (0:00:00.244) 0:00:13.988 ********* 2026-03-24 01:57:55.357689 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.357700 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:55.357711 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:55.357723 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:55.357735 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:55.357747 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:55.357758 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:55.357769 | orchestrator | 2026-03-24 01:57:55.357781 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-24 01:57:55.357793 | orchestrator | Tuesday 24 March 2026 01:57:46 +0000 (0:00:00.575) 0:00:14.563 ********* 2026-03-24 01:57:55.357806 | orchestrator | skipping: 
[testbed-manager] 2026-03-24 01:57:55.357818 | orchestrator | skipping: [testbed-node-3] 2026-03-24 01:57:55.357829 | orchestrator | skipping: [testbed-node-4] 2026-03-24 01:57:55.357841 | orchestrator | skipping: [testbed-node-5] 2026-03-24 01:57:55.357853 | orchestrator | skipping: [testbed-node-0] 2026-03-24 01:57:55.357864 | orchestrator | skipping: [testbed-node-1] 2026-03-24 01:57:55.357876 | orchestrator | skipping: [testbed-node-2] 2026-03-24 01:57:55.357888 | orchestrator | 2026-03-24 01:57:55.357900 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-24 01:57:55.357913 | orchestrator | Tuesday 24 March 2026 01:57:46 +0000 (0:00:00.306) 0:00:14.870 ********* 2026-03-24 01:57:55.357926 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.357937 | orchestrator | changed: [testbed-node-3] 2026-03-24 01:57:55.357948 | orchestrator | changed: [testbed-node-4] 2026-03-24 01:57:55.357959 | orchestrator | changed: [testbed-node-5] 2026-03-24 01:57:55.357970 | orchestrator | changed: [testbed-node-0] 2026-03-24 01:57:55.357982 | orchestrator | changed: [testbed-node-1] 2026-03-24 01:57:55.358005 | orchestrator | changed: [testbed-node-2] 2026-03-24 01:57:55.358097 | orchestrator | 2026-03-24 01:57:55.358114 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-24 01:57:55.358125 | orchestrator | Tuesday 24 March 2026 01:57:47 +0000 (0:00:00.536) 0:00:15.406 ********* 2026-03-24 01:57:55.358136 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.358147 | orchestrator | changed: [testbed-node-4] 2026-03-24 01:57:55.358158 | orchestrator | changed: [testbed-node-5] 2026-03-24 01:57:55.358170 | orchestrator | changed: [testbed-node-1] 2026-03-24 01:57:55.358181 | orchestrator | changed: [testbed-node-3] 2026-03-24 01:57:55.358192 | orchestrator | changed: [testbed-node-2] 2026-03-24 01:57:55.358225 | orchestrator | changed: 
[testbed-node-0] 2026-03-24 01:57:55.358238 | orchestrator | 2026-03-24 01:57:55.358249 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-24 01:57:55.358261 | orchestrator | Tuesday 24 March 2026 01:57:48 +0000 (0:00:01.069) 0:00:16.475 ********* 2026-03-24 01:57:55.358272 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.358284 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:55.358296 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:55.358307 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:55.358319 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:55.358330 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:55.358341 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:55.358352 | orchestrator | 2026-03-24 01:57:55.358379 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-24 01:57:55.358391 | orchestrator | Tuesday 24 March 2026 01:57:49 +0000 (0:00:01.067) 0:00:17.543 ********* 2026-03-24 01:57:55.358431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 01:57:55.358444 | orchestrator | 2026-03-24 01:57:55.358456 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-24 01:57:55.358468 | orchestrator | Tuesday 24 March 2026 01:57:49 +0000 (0:00:00.297) 0:00:17.840 ********* 2026-03-24 01:57:55.358478 | orchestrator | skipping: [testbed-manager] 2026-03-24 01:57:55.358488 | orchestrator | changed: [testbed-node-1] 2026-03-24 01:57:55.358499 | orchestrator | changed: [testbed-node-4] 2026-03-24 01:57:55.358564 | orchestrator | changed: [testbed-node-5] 2026-03-24 01:57:55.358576 | orchestrator | changed: [testbed-node-2] 2026-03-24 
01:57:55.358586 | orchestrator | changed: [testbed-node-0] 2026-03-24 01:57:55.358596 | orchestrator | changed: [testbed-node-3] 2026-03-24 01:57:55.358606 | orchestrator | 2026-03-24 01:57:55.358616 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-24 01:57:55.358627 | orchestrator | Tuesday 24 March 2026 01:57:50 +0000 (0:00:01.367) 0:00:19.208 ********* 2026-03-24 01:57:55.358637 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.358648 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:55.358659 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:55.358669 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:55.358680 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:55.358691 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:55.358702 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:55.358712 | orchestrator | 2026-03-24 01:57:55.358722 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-24 01:57:55.358733 | orchestrator | Tuesday 24 March 2026 01:57:51 +0000 (0:00:00.202) 0:00:19.410 ********* 2026-03-24 01:57:55.358775 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.358786 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:55.358796 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:55.358807 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:55.358818 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:55.358829 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:55.358840 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:55.358851 | orchestrator | 2026-03-24 01:57:55.358862 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-24 01:57:55.358873 | orchestrator | Tuesday 24 March 2026 01:57:51 +0000 (0:00:00.199) 0:00:19.610 ********* 2026-03-24 01:57:55.358884 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.358894 | 
orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:55.358905 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:55.358915 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:55.358926 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:55.358935 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:55.358945 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:55.358955 | orchestrator | 2026-03-24 01:57:55.358966 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-24 01:57:55.358975 | orchestrator | Tuesday 24 March 2026 01:57:51 +0000 (0:00:00.220) 0:00:19.831 ********* 2026-03-24 01:57:55.358987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 01:57:55.358999 | orchestrator | 2026-03-24 01:57:55.359009 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-24 01:57:55.359019 | orchestrator | Tuesday 24 March 2026 01:57:51 +0000 (0:00:00.298) 0:00:20.129 ********* 2026-03-24 01:57:55.359029 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.359039 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:55.359061 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:55.359071 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:55.359080 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:55.359091 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:55.359102 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:55.359112 | orchestrator | 2026-03-24 01:57:55.359122 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-24 01:57:55.359132 | orchestrator | Tuesday 24 March 2026 01:57:52 +0000 (0:00:00.537) 0:00:20.666 ********* 2026-03-24 01:57:55.359142 | orchestrator | 
skipping: [testbed-manager] 2026-03-24 01:57:55.359153 | orchestrator | skipping: [testbed-node-3] 2026-03-24 01:57:55.359163 | orchestrator | skipping: [testbed-node-4] 2026-03-24 01:57:55.359174 | orchestrator | skipping: [testbed-node-5] 2026-03-24 01:57:55.359185 | orchestrator | skipping: [testbed-node-0] 2026-03-24 01:57:55.359196 | orchestrator | skipping: [testbed-node-1] 2026-03-24 01:57:55.359229 | orchestrator | skipping: [testbed-node-2] 2026-03-24 01:57:55.359241 | orchestrator | 2026-03-24 01:57:55.359252 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-24 01:57:55.359264 | orchestrator | Tuesday 24 March 2026 01:57:52 +0000 (0:00:00.222) 0:00:20.889 ********* 2026-03-24 01:57:55.359277 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.359290 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:55.359302 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:55.359315 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:55.359327 | orchestrator | changed: [testbed-node-2] 2026-03-24 01:57:55.359339 | orchestrator | changed: [testbed-node-1] 2026-03-24 01:57:55.359352 | orchestrator | changed: [testbed-node-0] 2026-03-24 01:57:55.359365 | orchestrator | 2026-03-24 01:57:55.359378 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-24 01:57:55.359391 | orchestrator | Tuesday 24 March 2026 01:57:53 +0000 (0:00:01.037) 0:00:21.926 ********* 2026-03-24 01:57:55.359404 | orchestrator | ok: [testbed-manager] 2026-03-24 01:57:55.359417 | orchestrator | ok: [testbed-node-4] 2026-03-24 01:57:55.359429 | orchestrator | ok: [testbed-node-5] 2026-03-24 01:57:55.359442 | orchestrator | ok: [testbed-node-3] 2026-03-24 01:57:55.359454 | orchestrator | ok: [testbed-node-1] 2026-03-24 01:57:55.359466 | orchestrator | ok: [testbed-node-0] 2026-03-24 01:57:55.359479 | orchestrator | ok: [testbed-node-2] 2026-03-24 01:57:55.359491 | orchestrator | 
2026-03-24 01:57:55.359504 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-24 01:57:55.359516 | orchestrator | Tuesday 24 March 2026 01:57:54 +0000 (0:00:00.566) 0:00:22.493 *********
2026-03-24 01:57:55.359528 | orchestrator | ok: [testbed-manager]
2026-03-24 01:57:55.359540 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:57:55.359553 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:57:55.359579 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:57:55.359607 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:58:35.317561 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:58:35.317682 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:58:35.317700 | orchestrator |
2026-03-24 01:58:35.317715 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-24 01:58:35.317730 | orchestrator | Tuesday 24 March 2026 01:57:55 +0000 (0:00:01.169) 0:00:23.662 *********
2026-03-24 01:58:35.317742 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.317756 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.317768 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.317779 | orchestrator | changed: [testbed-manager]
2026-03-24 01:58:35.317790 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:58:35.317803 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:58:35.317816 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:58:35.317829 | orchestrator |
2026-03-24 01:58:35.317842 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-24 01:58:35.317854 | orchestrator | Tuesday 24 March 2026 01:58:12 +0000 (0:00:17.232) 0:00:40.895 *********
2026-03-24 01:58:35.317866 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.317908 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.317922 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.317933 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.317945 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.317956 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.317969 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.317980 | orchestrator |
2026-03-24 01:58:35.317993 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-24 01:58:35.318005 | orchestrator | Tuesday 24 March 2026 01:58:12 +0000 (0:00:00.229) 0:00:41.125 *********
2026-03-24 01:58:35.318071 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.318090 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.318103 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.318117 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.318129 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.318143 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.318156 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.318168 | orchestrator |
2026-03-24 01:58:35.318181 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-24 01:58:35.318194 | orchestrator | Tuesday 24 March 2026 01:58:13 +0000 (0:00:00.226) 0:00:41.351 *********
2026-03-24 01:58:35.318206 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.318220 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.318257 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.318271 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.318283 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.318295 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.318308 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.318320 | orchestrator |
2026-03-24 01:58:35.318333 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-24 01:58:35.318345 | orchestrator | Tuesday 24 March 2026 01:58:13 +0000 (0:00:00.301) 0:00:41.610 *********
2026-03-24 01:58:35.318360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 01:58:35.318371 | orchestrator |
2026-03-24 01:58:35.318379 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-24 01:58:35.318387 | orchestrator | Tuesday 24 March 2026 01:58:13 +0000 (0:00:00.301) 0:00:41.911 *********
2026-03-24 01:58:35.318395 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.318402 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.318410 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.318417 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.318425 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.318433 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.318440 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.318448 | orchestrator |
2026-03-24 01:58:35.318455 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-24 01:58:35.318463 | orchestrator | Tuesday 24 March 2026 01:58:15 +0000 (0:00:02.051) 0:00:43.963 *********
2026-03-24 01:58:35.318471 | orchestrator | changed: [testbed-manager]
2026-03-24 01:58:35.318478 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:58:35.318486 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:58:35.318494 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:58:35.318501 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:58:35.318509 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:58:35.318516 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:58:35.318524 | orchestrator |
2026-03-24 01:58:35.318531 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-24 01:58:35.318552 | orchestrator | Tuesday 24 March 2026 01:58:16 +0000 (0:00:01.180) 0:00:45.143 *********
2026-03-24 01:58:35.318560 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.318567 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.318575 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.318591 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.318599 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.318607 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.318614 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.318622 | orchestrator |
2026-03-24 01:58:35.318629 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-24 01:58:35.318639 | orchestrator | Tuesday 24 March 2026 01:58:17 +0000 (0:00:00.816) 0:00:45.960 *********
2026-03-24 01:58:35.318653 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 01:58:35.318667 | orchestrator |
2026-03-24 01:58:35.318686 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-24 01:58:35.318701 | orchestrator | Tuesday 24 March 2026 01:58:17 +0000 (0:00:00.255) 0:00:46.216 *********
2026-03-24 01:58:35.318713 | orchestrator | changed: [testbed-manager]
2026-03-24 01:58:35.318726 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:58:35.318739 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:58:35.318752 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:58:35.318764 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:58:35.318776 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:58:35.318784 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:58:35.318792 | orchestrator |
2026-03-24 01:58:35.318818 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-24 01:58:35.318827 | orchestrator | Tuesday 24 March 2026 01:58:18 +0000 (0:00:01.069) 0:00:47.285 *********
2026-03-24 01:58:35.318834 | orchestrator | skipping: [testbed-manager]
2026-03-24 01:58:35.318842 | orchestrator | skipping: [testbed-node-3]
2026-03-24 01:58:35.318849 | orchestrator | skipping: [testbed-node-4]
2026-03-24 01:58:35.318857 | orchestrator | skipping: [testbed-node-5]
2026-03-24 01:58:35.318865 | orchestrator | skipping: [testbed-node-0]
2026-03-24 01:58:35.318872 | orchestrator | skipping: [testbed-node-1]
2026-03-24 01:58:35.318880 | orchestrator | skipping: [testbed-node-2]
2026-03-24 01:58:35.318887 | orchestrator |
2026-03-24 01:58:35.318895 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-24 01:58:35.318902 | orchestrator | Tuesday 24 March 2026 01:58:19 +0000 (0:00:00.227) 0:00:47.513 *********
2026-03-24 01:58:35.318910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 01:58:35.318918 | orchestrator |
2026-03-24 01:58:35.318926 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-24 01:58:35.318933 | orchestrator | Tuesday 24 March 2026 01:58:19 +0000 (0:00:00.294) 0:00:47.808 *********
2026-03-24 01:58:35.318941 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.318949 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.318956 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.318964 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.318971 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.318979 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.318986 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.318993 | orchestrator |
2026-03-24 01:58:35.319001 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-24 01:58:35.319009 | orchestrator | Tuesday 24 March 2026 01:58:21 +0000 (0:00:01.929) 0:00:49.737 *********
2026-03-24 01:58:35.319016 | orchestrator | changed: [testbed-manager]
2026-03-24 01:58:35.319024 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:58:35.319032 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:58:35.319039 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:58:35.319046 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:58:35.319054 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:58:35.319062 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:58:35.319076 | orchestrator |
2026-03-24 01:58:35.319084 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-24 01:58:35.319092 | orchestrator | Tuesday 24 March 2026 01:58:22 +0000 (0:00:01.129) 0:00:50.866 *********
2026-03-24 01:58:35.319100 | orchestrator | changed: [testbed-node-5]
2026-03-24 01:58:35.319107 | orchestrator | changed: [testbed-node-4]
2026-03-24 01:58:35.319115 | orchestrator | changed: [testbed-node-2]
2026-03-24 01:58:35.319122 | orchestrator | changed: [testbed-node-3]
2026-03-24 01:58:35.319130 | orchestrator | changed: [testbed-node-1]
2026-03-24 01:58:35.319137 | orchestrator | changed: [testbed-node-0]
2026-03-24 01:58:35.319145 | orchestrator | changed: [testbed-manager]
2026-03-24 01:58:35.319152 | orchestrator |
2026-03-24 01:58:35.319160 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-24 01:58:35.319168 | orchestrator | Tuesday 24 March 2026 01:58:32 +0000 (0:00:10.329) 0:01:01.196 *********
2026-03-24 01:58:35.319175 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.319182 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.319190 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.319197 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.319205 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.319212 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.319220 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.319227 | orchestrator |
2026-03-24 01:58:35.319251 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-24 01:58:35.319259 | orchestrator | Tuesday 24 March 2026 01:58:33 +0000 (0:00:00.775) 0:01:01.972 *********
2026-03-24 01:58:35.319267 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.319274 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.319281 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.319289 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.319296 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.319303 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.319311 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.319318 | orchestrator |
2026-03-24 01:58:35.319326 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-24 01:58:35.319333 | orchestrator | Tuesday 24 March 2026 01:58:34 +0000 (0:00:00.884) 0:01:02.856 *********
2026-03-24 01:58:35.319346 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.319354 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.319361 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.319368 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.319376 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.319383 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.319391 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.319398 | orchestrator |
2026-03-24 01:58:35.319405 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-24 01:58:35.319413 | orchestrator | Tuesday 24 March 2026 01:58:34 +0000 (0:00:00.261) 0:01:03.118 *********
2026-03-24 01:58:35.319421 | orchestrator | ok: [testbed-manager]
2026-03-24 01:58:35.319428 | orchestrator | ok: [testbed-node-3]
2026-03-24 01:58:35.319435 | orchestrator | ok: [testbed-node-4]
2026-03-24 01:58:35.319443 | orchestrator | ok: [testbed-node-5]
2026-03-24 01:58:35.319450 | orchestrator | ok: [testbed-node-0]
2026-03-24 01:58:35.319457 | orchestrator | ok: [testbed-node-1]
2026-03-24 01:58:35.319465 | orchestrator | ok: [testbed-node-2]
2026-03-24 01:58:35.319472 | orchestrator |
2026-03-24 01:58:35.319479 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-24 01:58:35.319487 | orchestrator | Tuesday 24 March 2026 01:58:35 +0000 (0:00:00.240) 0:01:03.359 *********
2026-03-24 01:58:35.319495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 01:58:35.319503 | orchestrator |
2026-03-24 01:58:35.319516 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-24 02:01:12.127586 | orchestrator | Tuesday 24 March 2026 01:58:35 +0000 (0:00:00.270) 0:01:03.629 *********
2026-03-24 02:01:12.127707 | orchestrator | ok: [testbed-manager]
2026-03-24 02:01:12.127726 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:01:12.127740 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:01:12.127752 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:01:12.127763 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:01:12.127774 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:01:12.127786 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:01:12.127798 | orchestrator |
2026-03-24 02:01:12.127811 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-24 02:01:12.127823 | orchestrator | Tuesday 24 March 2026 01:58:37 +0000 (0:00:01.896) 0:01:05.526 *********
2026-03-24 02:01:12.127835 | orchestrator | changed: [testbed-manager]
2026-03-24 02:01:12.127847 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:01:12.127859 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:01:12.127870 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:01:12.127882 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:01:12.127893 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:01:12.127905 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:01:12.127916 | orchestrator |
2026-03-24 02:01:12.127928 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-24 02:01:12.127941 | orchestrator | Tuesday 24 March 2026 01:58:37 +0000 (0:00:00.554) 0:01:06.081 *********
2026-03-24 02:01:12.127953 | orchestrator | ok: [testbed-manager]
2026-03-24 02:01:12.127964 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:01:12.127976 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:01:12.127987 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:01:12.127999 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:01:12.128010 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:01:12.128021 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:01:12.128033 | orchestrator |
2026-03-24 02:01:12.128045 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-24 02:01:12.128058 | orchestrator | Tuesday 24 March 2026 01:58:37 +0000 (0:00:00.197) 0:01:06.278 *********
2026-03-24 02:01:12.128069 | orchestrator | ok: [testbed-manager]
2026-03-24 02:01:12.128081 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:01:12.128093 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:01:12.128104 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:01:12.128117 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:01:12.128131 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:01:12.128144 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:01:12.128157 | orchestrator |
2026-03-24 02:01:12.128171 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-24 02:01:12.128184 | orchestrator | Tuesday 24 March 2026 01:58:39 +0000 (0:00:01.336) 0:01:07.614 *********
2026-03-24 02:01:12.128197 | orchestrator | changed: [testbed-manager]
2026-03-24 02:01:12.128210 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:01:12.128224 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:01:12.128237 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:01:12.128250 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:01:12.128263 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:01:12.128277 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:01:12.128290 | orchestrator |
2026-03-24 02:01:12.128308 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-24 02:01:12.128349 | orchestrator | Tuesday 24 March 2026 01:58:41 +0000 (0:00:01.963) 0:01:09.578 *********
2026-03-24 02:01:12.128362 | orchestrator | ok: [testbed-manager]
2026-03-24 02:01:12.128375 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:01:12.128388 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:01:12.128401 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:01:12.128414 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:01:12.128427 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:01:12.128440 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:01:12.128453 | orchestrator |
2026-03-24 02:01:12.128477 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-24 02:01:12.128516 | orchestrator | Tuesday 24 March 2026 01:58:43 +0000 (0:00:02.672) 0:01:12.250 *********
2026-03-24 02:01:12.128528 | orchestrator | ok: [testbed-manager]
2026-03-24 02:01:12.128539 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:01:12.128551 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:01:12.128562 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:01:12.128573 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:01:12.128584 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:01:12.128596 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:01:12.128607 | orchestrator |
2026-03-24 02:01:12.128618 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-24 02:01:12.128629 | orchestrator | Tuesday 24 March 2026 01:59:38 +0000 (0:00:54.976) 0:02:07.227 *********
2026-03-24 02:01:12.128641 | orchestrator | changed: [testbed-manager]
2026-03-24 02:01:12.128652 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:01:12.128664 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:01:12.128675 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:01:12.128686 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:01:12.128698 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:01:12.128709 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:01:12.128720 | orchestrator |
2026-03-24 02:01:12.128732 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-24 02:01:12.128743 | orchestrator | Tuesday 24 March 2026 02:00:57 +0000 (0:01:18.979) 0:03:26.206 *********
2026-03-24 02:01:12.128755 | orchestrator | ok: [testbed-manager]
2026-03-24 02:01:12.128766 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:01:12.128777 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:01:12.128789 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:01:12.128800 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:01:12.128811 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:01:12.128823 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:01:12.128834 | orchestrator |
2026-03-24 02:01:12.128845 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-24 02:01:12.128857 | orchestrator | Tuesday 24 March 2026 02:00:59 +0000 (0:00:02.007) 0:03:28.214 *********
2026-03-24 02:01:12.128868 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:01:12.128879 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:01:12.128890 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:01:12.128901 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:01:12.128913 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:01:12.128924 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:01:12.128935 | orchestrator | changed: [testbed-manager]
2026-03-24 02:01:12.128946 | orchestrator |
2026-03-24 02:01:12.128958 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-24 02:01:12.128969 | orchestrator | Tuesday 24 March 2026 02:01:11 +0000 (0:00:11.112) 0:03:39.326 *********
2026-03-24 02:01:12.129016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-24 02:01:12.129051 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-24 02:01:12.129075 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-24 02:01:12.129089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-24 02:01:12.129102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-24 02:01:12.129114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-24 02:01:12.129125 | orchestrator |
2026-03-24 02:01:12.129137 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-24 02:01:12.129149 | orchestrator | Tuesday 24 March 2026 02:01:11 +0000 (0:00:00.348) 0:03:39.675 *********
2026-03-24 02:01:12.129160 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-24 02:01:12.129172 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:01:12.129183 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-24 02:01:12.129195 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-24 02:01:12.129206 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:01:12.129223 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-24 02:01:12.129235 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:01:12.129247 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:01:12.129258 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-24 02:01:12.129270 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-24 02:01:12.129281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-24 02:01:12.129293 | orchestrator |
2026-03-24 02:01:12.129305 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-24 02:01:12.129351 | orchestrator | Tuesday 24 March 2026 02:01:12 +0000 (0:00:00.692) 0:03:40.367 *********
2026-03-24 02:01:12.129379 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-24 02:01:12.129391 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-24 02:01:12.129403 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-24 02:01:12.129414 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-24 02:01:12.129425 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-24 02:01:12.129445 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-24 02:01:21.128177 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-24 02:01:21.128287 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-24 02:01:21.128383 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-24 02:01:21.128399 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-24 02:01:21.128410 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-24 02:01:21.128422 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-24 02:01:21.128434 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-24 02:01:21.128445 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-24 02:01:21.128457 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-24 02:01:21.128468 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-24 02:01:21.128480 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-24 02:01:21.128492 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-24 02:01:21.128503 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-24 02:01:21.128514 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-24 02:01:21.128526 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-24 02:01:21.128538 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:01:21.128550 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-24 02:01:21.128562 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-24 02:01:21.128573 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-24 02:01:21.128585 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-24 02:01:21.128596 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-24 02:01:21.128607 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-24 02:01:21.128619 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-24 02:01:21.128630 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-24 02:01:21.128641 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-24 02:01:21.128720 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-24 02:01:21.128737 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-24 02:01:21.128750 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:01:21.128763 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-24 02:01:21.128775 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-24 02:01:21.128804 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-24 02:01:21.128818 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-24 02:01:21.128831 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-24 02:01:21.128844 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-24 02:01:21.128857 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-24 02:01:21.128880 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-24 02:01:21.128894 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:01:21.128907 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:01:21.128920 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-24 02:01:21.128933 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-24 02:01:21.128946 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-24 02:01:21.128958 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-24 02:01:21.128970 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-24 02:01:21.129002 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-24 02:01:21.129014 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-24 02:01:21.129025 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-24 02:01:21.129036 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-24 02:01:21.129048 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-24 02:01:21.129059 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-24 02:01:21.129070 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-24 02:01:21.129081 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-24 02:01:21.129093 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-24 02:01:21.129104 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-24 02:01:21.129115 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-24 02:01:21.129126 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-24 02:01:21.129137 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-24 02:01:21.129149 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-24 02:01:21.129160 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-24 02:01:21.129171 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-24 02:01:21.129183 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-24 02:01:21.129194 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-24 02:01:21.129205 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-24 02:01:21.129217 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-24 02:01:21.129228 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-24 02:01:21.129239 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-24 02:01:21.129251 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-24 02:01:21.129262 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-24 02:01:21.129274 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-24 02:01:21.129293 | orchestrator |
2026-03-24 02:01:21.129305 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-24 02:01:21.129351 | orchestrator | Tuesday 24 March 2026 02:01:19 +0000 (0:00:06.953) 0:03:47.321 *********
2026-03-24 02:01:21.129364 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-24 02:01:21.129376 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-24 02:01:21.129387 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-24 02:01:21.129399 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-24 02:01:21.129416 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-24 02:01:21.129428 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-24 02:01:21.129439 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-24 02:01:21.129451 | orchestrator |
2026-03-24 02:01:21.129462 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-24 02:01:21.129474 | orchestrator | Tuesday 24 March 2026 02:01:19 +0000 (0:00:00.603) 0:03:47.925 *********
2026-03-24 02:01:21.129485 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-24 02:01:21.129496 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:01:21.129508 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-24 02:01:21.129520 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-24 02:01:21.129532 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:01:21.129543 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:01:21.129554 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-24 02:01:21.129566 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:01:21.129577 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-24 02:01:21.129589 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-24 02:01:21.129607 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-24 02:01:34.293185 | orchestrator |
2026-03-24 02:01:34.293437 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-24 02:01:34.293475 | orchestrator | Tuesday 24 March 2026 02:01:21 +0000 (0:00:01.509) 0:03:49.434 *********
2026-03-24 02:01:34.293495 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-24 02:01:34.293514 | orchestrator | skipping:
[testbed-manager] 2026-03-24 02:01:34.293534 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-24 02:01:34.293552 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:01:34.293570 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-24 02:01:34.293589 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-24 02:01:34.293606 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:01:34.293625 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:01:34.293646 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-24 02:01:34.293665 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-24 02:01:34.293684 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-24 02:01:34.293704 | orchestrator | 2026-03-24 02:01:34.293726 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-24 02:01:34.293788 | orchestrator | Tuesday 24 March 2026 02:01:21 +0000 (0:00:00.585) 0:03:50.020 ********* 2026-03-24 02:01:34.293817 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-24 02:01:34.293837 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:01:34.293855 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-24 02:01:34.293874 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:01:34.293893 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-24 02:01:34.293911 | orchestrator | skipping: 
[testbed-node-1] 2026-03-24 02:01:34.293929 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-24 02:01:34.293946 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:01:34.293966 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-24 02:01:34.293984 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-24 02:01:34.294004 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-24 02:01:34.294108 | orchestrator | 2026-03-24 02:01:34.294130 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-24 02:01:34.294151 | orchestrator | Tuesday 24 March 2026 02:01:22 +0000 (0:00:00.549) 0:03:50.570 ********* 2026-03-24 02:01:34.294171 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:01:34.294191 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:01:34.294211 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:01:34.294230 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:01:34.294245 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:01:34.294257 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:01:34.294268 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:01:34.294279 | orchestrator | 2026-03-24 02:01:34.294291 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-24 02:01:34.294303 | orchestrator | Tuesday 24 March 2026 02:01:22 +0000 (0:00:00.327) 0:03:50.898 ********* 2026-03-24 02:01:34.294314 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:01:34.294361 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:01:34.294374 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:01:34.294386 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:01:34.294397 | 
orchestrator | ok: [testbed-node-2] 2026-03-24 02:01:34.294408 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:01:34.294420 | orchestrator | ok: [testbed-manager] 2026-03-24 02:01:34.294431 | orchestrator | 2026-03-24 02:01:34.294443 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-24 02:01:34.294454 | orchestrator | Tuesday 24 March 2026 02:01:28 +0000 (0:00:05.643) 0:03:56.542 ********* 2026-03-24 02:01:34.294466 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-24 02:01:34.294477 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:01:34.294489 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-24 02:01:34.294500 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:01:34.294511 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-24 02:01:34.294522 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:01:34.294533 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-24 02:01:34.294545 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-24 02:01:34.294557 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:01:34.294568 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-24 02:01:34.294598 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:01:34.294610 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:01:34.294622 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-24 02:01:34.294633 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:01:34.294656 | orchestrator | 2026-03-24 02:01:34.294668 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-24 02:01:34.294679 | orchestrator | Tuesday 24 March 2026 02:01:28 +0000 (0:00:00.357) 0:03:56.900 ********* 2026-03-24 02:01:34.294691 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-24 02:01:34.294702 | orchestrator | ok: [testbed-node-3] => 
(item=cron) 2026-03-24 02:01:34.294713 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-24 02:01:34.294747 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-24 02:01:34.294760 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-24 02:01:34.294771 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-24 02:01:34.294782 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-24 02:01:34.294793 | orchestrator | 2026-03-24 02:01:34.294805 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-24 02:01:34.294816 | orchestrator | Tuesday 24 March 2026 02:01:29 +0000 (0:00:01.101) 0:03:58.002 ********* 2026-03-24 02:01:34.294830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:01:34.294845 | orchestrator | 2026-03-24 02:01:34.294856 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-24 02:01:34.294868 | orchestrator | Tuesday 24 March 2026 02:01:30 +0000 (0:00:00.464) 0:03:58.466 ********* 2026-03-24 02:01:34.294879 | orchestrator | ok: [testbed-manager] 2026-03-24 02:01:34.294891 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:01:34.294902 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:01:34.294914 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:01:34.294925 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:01:34.294936 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:01:34.294947 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:01:34.294958 | orchestrator | 2026-03-24 02:01:34.294970 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-24 02:01:34.294981 | orchestrator | Tuesday 24 March 2026 02:01:31 +0000 (0:00:01.323) 0:03:59.790 
********* 2026-03-24 02:01:34.294993 | orchestrator | ok: [testbed-manager] 2026-03-24 02:01:34.295004 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:01:34.295015 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:01:34.295026 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:01:34.295037 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:01:34.295049 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:01:34.295060 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:01:34.295071 | orchestrator | 2026-03-24 02:01:34.295082 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-24 02:01:34.295094 | orchestrator | Tuesday 24 March 2026 02:01:32 +0000 (0:00:00.599) 0:04:00.390 ********* 2026-03-24 02:01:34.295105 | orchestrator | changed: [testbed-manager] 2026-03-24 02:01:34.295117 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:01:34.295128 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:01:34.295139 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:01:34.295151 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:01:34.295162 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:01:34.295173 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:01:34.295185 | orchestrator | 2026-03-24 02:01:34.295196 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-24 02:01:34.295208 | orchestrator | Tuesday 24 March 2026 02:01:32 +0000 (0:00:00.578) 0:04:00.968 ********* 2026-03-24 02:01:34.295219 | orchestrator | ok: [testbed-manager] 2026-03-24 02:01:34.295231 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:01:34.295242 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:01:34.295253 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:01:34.295264 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:01:34.295276 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:01:34.295287 | orchestrator | ok: [testbed-node-2] 2026-03-24 
02:01:34.295298 | orchestrator | 2026-03-24 02:01:34.295309 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-24 02:01:34.295370 | orchestrator | Tuesday 24 March 2026 02:01:33 +0000 (0:00:00.637) 0:04:01.605 ********* 2026-03-24 02:01:34.295394 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774316249.5956368, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:34.295411 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774316284.0126777, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:34.295423 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774316286.9476905, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:34.295463 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774316283.9696872, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267613 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774316288.900417, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267728 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774316282.2048562, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267745 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774316288.4003375, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267785 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267812 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267825 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267837 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267876 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267890 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 
02:01:39.267902 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 02:01:39.267923 | orchestrator | 2026-03-24 02:01:39.267937 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-24 02:01:39.267950 | orchestrator | Tuesday 24 March 2026 02:01:34 +0000 (0:00:00.992) 0:04:02.598 ********* 2026-03-24 02:01:39.267962 | orchestrator | changed: [testbed-manager] 2026-03-24 02:01:39.267975 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:01:39.267986 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:01:39.267998 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:01:39.268010 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:01:39.268021 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:01:39.268033 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:01:39.268044 | orchestrator | 2026-03-24 02:01:39.268056 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-03-24 02:01:39.268067 | orchestrator | Tuesday 24 March 2026 02:01:35 +0000 (0:00:01.144) 0:04:03.743 ********* 2026-03-24 02:01:39.268079 | orchestrator | changed: [testbed-manager] 2026-03-24 02:01:39.268090 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:01:39.268102 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:01:39.268113 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:01:39.268125 | orchestrator | changed: [testbed-node-0] 
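Editor's note: the per-item result lines throughout this log follow a fixed shape (`status: [host] => (item={...})`). A minimal, hedged sketch of a parser for that shape, useful when post-processing such console output (the function name and regex are illustrative, not part of any OSISM tooling):

```python
import ast
import re

# Matches Ansible per-item result lines as they appear in this console log, e.g.
# "changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})"
# The "(item=...)" suffix is optional, covering bare "skipping: [host]" lines too.
ITEM_RE = re.compile(r"(ok|changed|skipping): \[([\w-]+)\](?: => \(item=(\{.*?\})\))?")

def parse_result(line):
    """Return (status, host, item_dict_or_None) for one task result line, or None."""
    m = ITEM_RE.search(line)
    if not m:
        return None
    status, host, item = m.groups()
    # Item dicts in the log are Python-literal syntax, so literal_eval is safe here.
    return status, host, ast.literal_eval(item) if item else None

# Example, taken verbatim from the log above:
line = "changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})"
print(parse_result(line))
```

Feeding the whole log through this and grouping by host gives a quick changed/ok/skipping summary per node without waiting for the play recap.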
2026-03-24 02:01:39.268136 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:01:39.268149 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:01:39.268162 | orchestrator | 2026-03-24 02:01:39.268181 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-24 02:01:39.268196 | orchestrator | Tuesday 24 March 2026 02:01:36 +0000 (0:00:01.175) 0:04:04.918 ********* 2026-03-24 02:01:39.268209 | orchestrator | changed: [testbed-manager] 2026-03-24 02:01:39.268222 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:01:39.268234 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:01:39.268248 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:01:39.268261 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:01:39.268274 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:01:39.268288 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:01:39.268300 | orchestrator | 2026-03-24 02:01:39.268313 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-24 02:01:39.268393 | orchestrator | Tuesday 24 March 2026 02:01:37 +0000 (0:00:01.256) 0:04:06.175 ********* 2026-03-24 02:01:39.268408 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:01:39.268421 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:01:39.268434 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:01:39.268447 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:01:39.268460 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:01:39.268472 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:01:39.268485 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:01:39.268498 | orchestrator | 2026-03-24 02:01:39.268511 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-24 02:01:39.268522 | orchestrator | Tuesday 24 March 2026 02:01:38 +0000 (0:00:00.299) 0:04:06.475 ********* 2026-03-24 
02:01:39.268534 | orchestrator | ok: [testbed-manager] 2026-03-24 02:01:39.268546 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:01:39.268558 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:01:39.268569 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:01:39.268580 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:01:39.268592 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:01:39.268603 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:01:39.268614 | orchestrator | 2026-03-24 02:01:39.268626 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-24 02:01:39.268637 | orchestrator | Tuesday 24 March 2026 02:01:38 +0000 (0:00:00.708) 0:04:07.184 ********* 2026-03-24 02:01:39.268651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:01:39.268672 | orchestrator | 2026-03-24 02:01:39.268684 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-24 02:01:39.268703 | orchestrator | Tuesday 24 March 2026 02:01:39 +0000 (0:00:00.393) 0:04:07.577 ********* 2026-03-24 02:03:02.264669 | orchestrator | ok: [testbed-manager] 2026-03-24 02:03:02.264783 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:03:02.264800 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:03:02.264813 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:03:02.264824 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:03:02.264836 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:03:02.264847 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:03:02.264859 | orchestrator | 2026-03-24 02:03:02.264872 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-24 02:03:02.264885 | orchestrator | 
Tuesday 24 March 2026 02:01:48 +0000 (0:00:09.692) 0:04:17.270 ********* 2026-03-24 02:03:02.264896 | orchestrator | ok: [testbed-manager] 2026-03-24 02:03:02.264908 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:03:02.264919 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:03:02.264931 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:03:02.264943 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:03:02.264954 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:03:02.264965 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:03:02.264977 | orchestrator | 2026-03-24 02:03:02.264989 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-24 02:03:02.265000 | orchestrator | Tuesday 24 March 2026 02:01:50 +0000 (0:00:01.574) 0:04:18.844 ********* 2026-03-24 02:03:02.265012 | orchestrator | ok: [testbed-manager] 2026-03-24 02:03:02.265023 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:03:02.265035 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:03:02.265046 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:03:02.265057 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:03:02.265069 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:03:02.265080 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:03:02.265091 | orchestrator | 2026-03-24 02:03:02.265103 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-24 02:03:02.265115 | orchestrator | Tuesday 24 March 2026 02:01:51 +0000 (0:00:01.189) 0:04:20.034 ********* 2026-03-24 02:03:02.265126 | orchestrator | ok: [testbed-manager] 2026-03-24 02:03:02.265138 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:03:02.265149 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:03:02.265160 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:03:02.265172 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:03:02.265184 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:03:02.265196 | orchestrator | ok: 
[testbed-node-2]
2026-03-24 02:03:02.265210 | orchestrator |
2026-03-24 02:03:02.265223 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-03-24 02:03:02.265238 | orchestrator | Tuesday 24 March 2026 02:01:52 +0000 (0:00:00.290) 0:04:20.324 *********
2026-03-24 02:03:02.265251 | orchestrator | ok: [testbed-manager]
2026-03-24 02:03:02.265264 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:03:02.265278 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:03:02.265291 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:03:02.265305 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:03:02.265318 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:03:02.265332 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:03:02.265346 | orchestrator |
2026-03-24 02:03:02.265412 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-03-24 02:03:02.265427 | orchestrator | Tuesday 24 March 2026 02:01:52 +0000 (0:00:00.324) 0:04:20.648 *********
2026-03-24 02:03:02.265440 | orchestrator | ok: [testbed-manager]
2026-03-24 02:03:02.265454 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:03:02.265467 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:03:02.265507 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:03:02.265521 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:03:02.265534 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:03:02.265547 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:03:02.265560 | orchestrator |
2026-03-24 02:03:02.265575 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-03-24 02:03:02.265588 | orchestrator | Tuesday 24 March 2026 02:01:52 +0000 (0:00:00.284) 0:04:20.933 *********
2026-03-24 02:03:02.265599 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:03:02.265611 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:03:02.265622 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:03:02.265634 | orchestrator | ok: [testbed-manager]
2026-03-24 02:03:02.265645 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:03:02.265657 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:03:02.265668 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:03:02.265679 | orchestrator |
2026-03-24 02:03:02.265691 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-03-24 02:03:02.265703 | orchestrator | Tuesday 24 March 2026 02:01:57 +0000 (0:00:04.926) 0:04:25.859 *********
2026-03-24 02:03:02.265717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:03:02.265731 | orchestrator |
2026-03-24 02:03:02.265743 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-03-24 02:03:02.265755 | orchestrator | Tuesday 24 March 2026 02:01:57 +0000 (0:00:00.372) 0:04:26.232 *********
2026-03-24 02:03:02.265766 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-03-24 02:03:02.265778 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-03-24 02:03:02.265790 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-03-24 02:03:02.265801 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-03-24 02:03:02.265813 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:03:02.265842 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-03-24 02:03:02.265854 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-03-24 02:03:02.265866 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:03:02.265877 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-03-24 02:03:02.265906 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-03-24 02:03:02.265929 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:03:02.265940 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-03-24 02:03:02.265952 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-03-24 02:03:02.265963 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:03:02.265975 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-03-24 02:03:02.265986 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-03-24 02:03:02.266075 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:03:02.266090 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:03:02.266101 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-03-24 02:03:02.266113 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-03-24 02:03:02.266125 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:03:02.266136 | orchestrator |
2026-03-24 02:03:02.266148 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-03-24 02:03:02.266160 | orchestrator | Tuesday 24 March 2026 02:01:58 +0000 (0:00:00.373) 0:04:26.606 *********
2026-03-24 02:03:02.266171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:03:02.266183 | orchestrator |
2026-03-24 02:03:02.266195 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-03-24 02:03:02.266217 | orchestrator | Tuesday 24 March 2026 02:01:58 +0000 (0:00:00.427) 0:04:27.034 *********
2026-03-24 02:03:02.266228 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-03-24 02:03:02.266240 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-03-24 02:03:02.266252 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:03:02.266264 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:03:02.266275 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-03-24 02:03:02.266287 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-03-24 02:03:02.266298 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:03:02.266310 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-03-24 02:03:02.266321 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:03:02.266333 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-03-24 02:03:02.266344 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:03:02.266381 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:03:02.266394 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-03-24 02:03:02.266406 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:03:02.266417 | orchestrator |
2026-03-24 02:03:02.266429 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-03-24 02:03:02.266441 | orchestrator | Tuesday 24 March 2026 02:01:59 +0000 (0:00:00.312) 0:04:27.346 *********
2026-03-24 02:03:02.266453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:03:02.266465 | orchestrator |
2026-03-24 02:03:02.266477 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-24 02:03:02.266488 | orchestrator | Tuesday 24 March 2026 02:01:59 +0000 (0:00:00.437) 0:04:27.783 *********
2026-03-24 02:03:02.266500 | orchestrator | changed: [testbed-manager]
2026-03-24 02:03:02.266511 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:03:02.266523 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:03:02.266535 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:03:02.266552 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:03:02.266564 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:03:02.266575 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:03:02.266587 | orchestrator |
2026-03-24 02:03:02.266598 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-24 02:03:02.266610 | orchestrator | Tuesday 24 March 2026 02:02:35 +0000 (0:00:36.197) 0:05:03.981 *********
2026-03-24 02:03:02.266622 | orchestrator | changed: [testbed-manager]
2026-03-24 02:03:02.266633 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:03:02.266645 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:03:02.266659 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:03:02.266679 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:03:02.266697 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:03:02.266717 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:03:02.266737 | orchestrator |
2026-03-24 02:03:02.266756 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-24 02:03:02.266775 | orchestrator | Tuesday 24 March 2026 02:02:44 +0000 (0:00:08.975) 0:05:12.956 *********
2026-03-24 02:03:02.266787 | orchestrator | changed: [testbed-manager]
2026-03-24 02:03:02.266798 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:03:02.266810 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:03:02.266821 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:03:02.266833 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:03:02.266844 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:03:02.266856 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:03:02.266867 | orchestrator |
2026-03-24 02:03:02.266879 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-24 02:03:02.266899 | orchestrator | Tuesday 24 March 2026 02:02:53 +0000 (0:00:08.861) 0:05:21.817 *********
2026-03-24 02:03:02.266911 | orchestrator | ok: [testbed-manager]
2026-03-24 02:03:02.266922 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:03:02.266934 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:03:02.266945 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:03:02.266957 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:03:02.266968 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:03:02.266980 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:03:02.266991 | orchestrator |
2026-03-24 02:03:02.267003 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-24 02:03:02.267014 | orchestrator | Tuesday 24 March 2026 02:02:55 +0000 (0:00:02.062) 0:05:23.880 *********
2026-03-24 02:03:02.267026 | orchestrator | changed: [testbed-manager]
2026-03-24 02:03:02.267038 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:03:02.267049 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:03:02.267060 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:03:02.267072 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:03:02.267083 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:03:02.267095 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:03:02.267107 | orchestrator |
2026-03-24 02:03:02.267127 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-24 02:03:13.163935 | orchestrator | Tuesday 24 March 2026 02:03:02 +0000 (0:00:06.687) 0:05:30.567 *********
2026-03-24 02:03:13.164049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:03:13.164067 | orchestrator |
2026-03-24 02:03:13.164080 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-24 02:03:13.164092 | orchestrator | Tuesday 24 March 2026 02:03:02 +0000 (0:00:00.516) 0:05:31.083 *********
2026-03-24 02:03:13.164103 | orchestrator | changed: [testbed-manager]
2026-03-24 02:03:13.164116 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:03:13.164128 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:03:13.164139 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:03:13.164150 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:03:13.164162 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:03:13.164173 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:03:13.164185 | orchestrator |
2026-03-24 02:03:13.164196 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-24 02:03:13.164208 | orchestrator | Tuesday 24 March 2026 02:03:03 +0000 (0:00:00.707) 0:05:31.791 *********
2026-03-24 02:03:13.164219 | orchestrator | ok: [testbed-manager]
2026-03-24 02:03:13.164231 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:03:13.164243 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:03:13.164254 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:03:13.164265 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:03:13.164276 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:03:13.164288 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:03:13.164299 | orchestrator |
2026-03-24 02:03:13.164311 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-24 02:03:13.164322 | orchestrator | Tuesday 24 March 2026 02:03:05 +0000 (0:00:01.885) 0:05:33.677 *********
2026-03-24 02:03:13.164334 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:03:13.164345 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:03:13.164410 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:03:13.164433 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:03:13.164453 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:03:13.164474 | orchestrator | changed: [testbed-manager]
2026-03-24 02:03:13.164492 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:03:13.164510 | orchestrator |
2026-03-24 02:03:13.164529 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-24 02:03:13.164549 | orchestrator | Tuesday 24 March 2026 02:03:06 +0000 (0:00:00.791) 0:05:34.468 *********
2026-03-24 02:03:13.164597 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:03:13.164610 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:03:13.164623 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:03:13.164636 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:03:13.164649 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:03:13.164663 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:03:13.164676 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:03:13.164689 | orchestrator |
2026-03-24 02:03:13.164703 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-24 02:03:13.164716 | orchestrator | Tuesday 24 March 2026 02:03:06 +0000 (0:00:00.238) 0:05:34.707 *********
2026-03-24 02:03:13.164729 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:03:13.164742 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:03:13.164755 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:03:13.164785 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:03:13.164798 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:03:13.164812 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:03:13.164824 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:03:13.164836 | orchestrator |
2026-03-24 02:03:13.164848 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-24 02:03:13.164859 | orchestrator | Tuesday 24 March 2026 02:03:06 +0000 (0:00:00.383) 0:05:35.090 *********
2026-03-24 02:03:13.164870 | orchestrator | ok: [testbed-manager]
2026-03-24 02:03:13.164882 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:03:13.164893 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:03:13.164904 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:03:13.164915 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:03:13.164927 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:03:13.164938 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:03:13.164949 | orchestrator |
2026-03-24 02:03:13.164960 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-24 02:03:13.164972 | orchestrator | Tuesday 24 March 2026 02:03:07 +0000 (0:00:00.266) 0:05:35.357 *********
2026-03-24 02:03:13.164983 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:03:13.164994 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:03:13.165006 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:03:13.165017 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:03:13.165032 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:03:13.165051 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:03:13.165069 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:03:13.165087 | orchestrator |
2026-03-24 02:03:13.165106 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-24 02:03:13.165125 | orchestrator | Tuesday 24 March 2026 02:03:07 +0000 (0:00:00.265) 0:05:35.622 *********
2026-03-24 02:03:13.165143 | orchestrator | ok: [testbed-manager]
2026-03-24 02:03:13.165161 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:03:13.165179 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:03:13.165196 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:03:13.165215 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:03:13.165233 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:03:13.165252 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:03:13.165271 | orchestrator |
2026-03-24 02:03:13.165290 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-24 02:03:13.165308 | orchestrator | Tuesday 24 March 2026 02:03:07 +0000 (0:00:00.282) 0:05:35.905 *********
2026-03-24 02:03:13.165327 | orchestrator | ok: [testbed-manager] =>
2026-03-24 02:03:13.165347 | orchestrator |   docker_version: 5:27.5.1
2026-03-24 02:03:13.165569 | orchestrator | ok: [testbed-node-3] =>
2026-03-24 02:03:13.165594 | orchestrator |   docker_version: 5:27.5.1
2026-03-24 02:03:13.165611 | orchestrator | ok: [testbed-node-4] =>
2026-03-24 02:03:13.165624 | orchestrator |   docker_version: 5:27.5.1
2026-03-24 02:03:13.165635 | orchestrator | ok: [testbed-node-5] =>
2026-03-24 02:03:13.165646 | orchestrator |   docker_version: 5:27.5.1
2026-03-24 02:03:13.165697 | orchestrator | ok: [testbed-node-0] =>
2026-03-24 02:03:13.165710 | orchestrator |   docker_version: 5:27.5.1
2026-03-24 02:03:13.165721 | orchestrator | ok: [testbed-node-1] =>
2026-03-24 02:03:13.165735 | orchestrator |   docker_version: 5:27.5.1
2026-03-24 02:03:13.165754 | orchestrator | ok: [testbed-node-2] =>
2026-03-24 02:03:13.165782 | orchestrator |   docker_version: 5:27.5.1
2026-03-24 02:03:13.165801 | orchestrator |
2026-03-24 02:03:13.165817 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-24 02:03:13.165834 | orchestrator | Tuesday 24 March 2026 02:03:07 +0000 (0:00:00.240) 0:05:36.146 *********
2026-03-24 02:03:13.165852 | orchestrator | ok: [testbed-manager] =>
2026-03-24 02:03:13.165870 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-24 02:03:13.165885 | orchestrator | ok: [testbed-node-3] =>
2026-03-24 02:03:13.165900 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-24 02:03:13.165916 | orchestrator | ok: [testbed-node-4] =>
2026-03-24 02:03:13.165931 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-24 02:03:13.165945 | orchestrator | ok: [testbed-node-5] =>
2026-03-24 02:03:13.165960 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-24 02:03:13.165975 | orchestrator | ok: [testbed-node-0] =>
2026-03-24 02:03:13.165990 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-24 02:03:13.166005 | orchestrator | ok: [testbed-node-1] =>
2026-03-24 02:03:13.166101 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-24 02:03:13.166160 | orchestrator | ok: [testbed-node-2] =>
2026-03-24 02:03:13.166180 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-24 02:03:13.166197 | orchestrator |
2026-03-24 02:03:13.166214 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-24 02:03:13.166231 | orchestrator | Tuesday 24 March 2026 02:03:08 +0000 (0:00:00.272) 0:05:36.418 *********
2026-03-24 02:03:13.166242 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:03:13.166252 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:03:13.166261 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:03:13.166271 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:03:13.166281 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:03:13.166292 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:03:13.166302 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:03:13.166312 | orchestrator |
2026-03-24 02:03:13.166322 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-24 02:03:13.166332 | orchestrator | Tuesday 24 March 2026 02:03:08 +0000 (0:00:00.233) 0:05:36.652 *********
2026-03-24 02:03:13.166342 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:03:13.166384 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:03:13.166397 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:03:13.166407 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:03:13.166417 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:03:13.166427 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:03:13.166437 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:03:13.166447 | orchestrator |
2026-03-24 02:03:13.166473 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-24 02:03:13.166493 | orchestrator | Tuesday 24 March 2026 02:03:08 +0000 (0:00:00.259) 0:05:36.912 *********
2026-03-24 02:03:13.166506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:03:13.166519 | orchestrator |
2026-03-24 02:03:13.166538 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-24 02:03:13.166549 | orchestrator | Tuesday 24 March 2026 02:03:08 +0000 (0:00:00.395) 0:05:37.308 *********
2026-03-24 02:03:13.166559 | orchestrator | ok: [testbed-manager]
2026-03-24 02:03:13.166570 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:03:13.166580 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:03:13.166590 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:03:13.166600 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:03:13.166621 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:03:13.166631 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:03:13.166641 | orchestrator |
2026-03-24 02:03:13.166651 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-24 02:03:13.166661 | orchestrator | Tuesday 24 March 2026 02:03:09 +0000 (0:00:00.973) 0:05:38.281 *********
2026-03-24 02:03:13.166671 | orchestrator | ok: [testbed-manager]
2026-03-24 02:03:13.166681 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:03:13.166691 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:03:13.166701 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:03:13.166711 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:03:13.166721 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:03:13.166731 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:03:13.166741 | orchestrator |
2026-03-24 02:03:13.166751 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-24 02:03:13.166762 | orchestrator | Tuesday 24 March 2026 02:03:12 +0000 (0:00:02.832) 0:05:41.113 *********
2026-03-24 02:03:13.166772 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-24 02:03:13.166783 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-24 02:03:13.166793 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-24 02:03:13.166803 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:03:13.166813 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-24 02:03:13.166823 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-24 02:03:13.166833 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-24 02:03:13.166843 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-24 02:03:13.166853 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-24 02:03:13.166863 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-24 02:03:13.166873 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:03:13.166883 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-24 02:03:13.166893 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-24 02:03:13.166903 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-24 02:03:13.166913 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:03:13.166923 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-24 02:03:13.166946 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-24 02:04:17.222712 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-24 02:04:17.222820 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:04:17.222834 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-24 02:04:17.222845 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-24 02:04:17.222856 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-24 02:04:17.222866 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:04:17.222876 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:04:17.222887 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-24 02:04:17.222898 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-24 02:04:17.222908 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-24 02:04:17.222918 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:04:17.222929 | orchestrator |
2026-03-24 02:04:17.222940 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-24 02:04:17.222951 | orchestrator | Tuesday 24 March 2026 02:03:13 +0000 (0:00:00.538) 0:05:41.652 *********
2026-03-24 02:04:17.222962 | orchestrator | ok: [testbed-manager]
2026-03-24 02:04:17.222972 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.222982 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.222992 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.223004 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.223014 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.223047 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.223058 | orchestrator |
2026-03-24 02:04:17.223068 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-24 02:04:17.223078 | orchestrator | Tuesday 24 March 2026 02:03:20 +0000 (0:00:07.411) 0:05:49.064 *********
2026-03-24 02:04:17.223088 | orchestrator | ok: [testbed-manager]
2026-03-24 02:04:17.223099 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.223109 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.223119 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.223129 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.223139 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.223150 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.223161 | orchestrator |
2026-03-24 02:04:17.223171 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-24 02:04:17.223181 | orchestrator | Tuesday 24 March 2026 02:03:21 +0000 (0:00:01.059) 0:05:50.123 *********
2026-03-24 02:04:17.223192 | orchestrator | ok: [testbed-manager]
2026-03-24 02:04:17.223202 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.223212 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.223222 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.223235 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.223246 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.223258 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.223269 | orchestrator |
2026-03-24 02:04:17.223281 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-24 02:04:17.223293 | orchestrator | Tuesday 24 March 2026 02:03:31 +0000 (0:00:09.281) 0:05:59.406 *********
2026-03-24 02:04:17.223304 | orchestrator | changed: [testbed-manager]
2026-03-24 02:04:17.223316 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.223328 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.223340 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.223351 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.223363 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.223376 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.223423 | orchestrator |
2026-03-24 02:04:17.223435 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-24 02:04:17.223447 | orchestrator | Tuesday 24 March 2026 02:03:34 +0000 (0:00:03.344) 0:06:02.750 *********
2026-03-24 02:04:17.223460 | orchestrator | ok: [testbed-manager]
2026-03-24 02:04:17.223472 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.223484 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.223496 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.223508 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.223520 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.223532 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.223544 | orchestrator |
2026-03-24 02:04:17.223556 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-24 02:04:17.223568 | orchestrator | Tuesday 24 March 2026 02:03:35 +0000 (0:00:01.386) 0:06:04.137 *********
2026-03-24 02:04:17.223580 | orchestrator | ok: [testbed-manager]
2026-03-24 02:04:17.223591 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.223601 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.223611 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.223621 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.223632 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.223642 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.223652 | orchestrator |
2026-03-24 02:04:17.223662 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-24 02:04:17.223672 | orchestrator | Tuesday 24 March 2026 02:03:37 +0000 (0:00:01.526) 0:06:05.664 *********
2026-03-24 02:04:17.223682 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:04:17.223693 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:04:17.223703 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:04:17.223713 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:04:17.223730 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:04:17.223741 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:04:17.223751 | orchestrator | changed: [testbed-manager]
2026-03-24 02:04:17.223761 | orchestrator |
2026-03-24 02:04:17.223771 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-24 02:04:17.223782 | orchestrator | Tuesday 24 March 2026 02:03:37 +0000 (0:00:00.584) 0:06:06.249 *********
2026-03-24 02:04:17.223792 | orchestrator | ok: [testbed-manager]
2026-03-24 02:04:17.223802 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.223812 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.223822 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.223832 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.223842 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.223853 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.223863 | orchestrator |
2026-03-24 02:04:17.223873 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-24 02:04:17.223900 | orchestrator | Tuesday 24 March 2026 02:03:48 +0000 (0:00:10.424) 0:06:16.674 *********
2026-03-24 02:04:17.223911 | orchestrator | changed: [testbed-manager]
2026-03-24 02:04:17.223921 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.223931 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.223941 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.223952 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.223962 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.223972 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.223982 | orchestrator |
2026-03-24 02:04:17.223993 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-24 02:04:17.224003 | orchestrator | Tuesday 24 March 2026 02:03:49 +0000 (0:00:00.891) 0:06:17.565 *********
2026-03-24 02:04:17.224013 | orchestrator | ok: [testbed-manager]
2026-03-24 02:04:17.224024 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.224034 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.224044 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.224054 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.224065 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.224075 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.224085 | orchestrator |
2026-03-24 02:04:17.224095 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-24 02:04:17.224106 | orchestrator | Tuesday 24 March 2026 02:03:58 +0000 (0:00:09.751) 0:06:27.317 *********
2026-03-24 02:04:17.224116 | orchestrator | ok: [testbed-manager]
2026-03-24 02:04:17.224126 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.224137 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.224147 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.224157 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.224167 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.224177 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.224187 | orchestrator |
2026-03-24 02:04:17.224198 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-24 02:04:17.224208 | orchestrator | Tuesday 24 March 2026 02:04:10 +0000 (0:00:11.502) 0:06:38.820 *********
2026-03-24 02:04:17.224219 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-24 02:04:17.224229 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-24 02:04:17.224239 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-24 02:04:17.224249 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-24 02:04:17.224260 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-24 02:04:17.224270 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-24 02:04:17.224280 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-24 02:04:17.224291 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-24 02:04:17.224301 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-24 02:04:17.224317 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-24 02:04:17.224328 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-24 02:04:17.224440 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-24 02:04:17.224455 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-24 02:04:17.224466 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-24 02:04:17.224476 | orchestrator |
2026-03-24 02:04:17.224487 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-24 02:04:17.224497 | orchestrator | Tuesday 24 March 2026 02:04:11 +0000 (0:00:01.161) 0:06:39.981 *********
2026-03-24 02:04:17.224512 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:04:17.224523 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:04:17.224533 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:04:17.224543 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:04:17.224554 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:04:17.224564 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:04:17.224574 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:04:17.224584 | orchestrator |
2026-03-24 02:04:17.224594 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-24 02:04:17.224605 | orchestrator | Tuesday 24 March 2026 02:04:12 +0000 (0:00:00.561) 0:06:40.543 *********
2026-03-24 02:04:17.224615 | orchestrator | ok: [testbed-manager]
2026-03-24 02:04:17.224626 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:04:17.224636 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:04:17.224646 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:04:17.224657 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:04:17.224667 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:04:17.224677 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:04:17.224687 | orchestrator |
2026-03-24 02:04:17.224698 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-24 02:04:17.224709 | orchestrator | Tuesday 24 March 2026 02:04:16 +0000 (0:00:04.071) 0:06:44.614 *********
2026-03-24 02:04:17.224720 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:04:17.224730 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:04:17.224740 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:04:17.224750 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:04:17.224761 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:04:17.224771 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:04:17.224781 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:04:17.224791 | orchestrator |
2026-03-24 02:04:17.224816 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-24 02:04:17.224828 | orchestrator | Tuesday 24 March 2026 02:04:16 +0000 (0:00:00.470) 0:06:45.085 *********
2026-03-24 02:04:17.224848 | orchestrator | skipping: [testbed-manager] =>
(item=python3-docker)  2026-03-24 02:04:17.224859 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-03-24 02:04:17.224869 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:04:17.224879 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-03-24 02:04:17.224889 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-03-24 02:04:17.224899 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:04:17.224909 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-03-24 02:04:17.224920 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-03-24 02:04:17.224930 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:04:17.224948 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-03-24 02:04:37.249284 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-03-24 02:04:37.249496 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:04:37.249523 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-03-24 02:04:37.249540 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-03-24 02:04:37.249554 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:04:37.249597 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-03-24 02:04:37.249613 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-03-24 02:04:37.249629 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:04:37.249646 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-03-24 02:04:37.249661 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-03-24 02:04:37.249677 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:04:37.249693 | orchestrator | 2026-03-24 02:04:37.249712 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-03-24 02:04:37.249729 | 
orchestrator | Tuesday 24 March 2026 02:04:17 +0000 (0:00:00.708) 0:06:45.793 ********* 2026-03-24 02:04:37.249745 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:04:37.249762 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:04:37.249777 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:04:37.249793 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:04:37.249810 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:04:37.249823 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:04:37.249837 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:04:37.249851 | orchestrator | 2026-03-24 02:04:37.249867 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-03-24 02:04:37.249882 | orchestrator | Tuesday 24 March 2026 02:04:17 +0000 (0:00:00.510) 0:06:46.304 ********* 2026-03-24 02:04:37.249897 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:04:37.249912 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:04:37.249927 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:04:37.249942 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:04:37.249957 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:04:37.249973 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:04:37.249987 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:04:37.250001 | orchestrator | 2026-03-24 02:04:37.250071 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-03-24 02:04:37.250089 | orchestrator | Tuesday 24 March 2026 02:04:18 +0000 (0:00:00.503) 0:06:46.807 ********* 2026-03-24 02:04:37.250104 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:04:37.250119 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:04:37.250134 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:04:37.250149 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:04:37.250163 | orchestrator | 
skipping: [testbed-node-0] 2026-03-24 02:04:37.250178 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:04:37.250194 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:04:37.250208 | orchestrator | 2026-03-24 02:04:37.250224 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-03-24 02:04:37.250240 | orchestrator | Tuesday 24 March 2026 02:04:18 +0000 (0:00:00.498) 0:06:47.305 ********* 2026-03-24 02:04:37.250256 | orchestrator | ok: [testbed-manager] 2026-03-24 02:04:37.250272 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:04:37.250287 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:04:37.250302 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:04:37.250318 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:04:37.250332 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:04:37.250348 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:04:37.250363 | orchestrator | 2026-03-24 02:04:37.250377 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-03-24 02:04:37.250564 | orchestrator | Tuesday 24 March 2026 02:04:21 +0000 (0:00:02.043) 0:06:49.348 ********* 2026-03-24 02:04:37.250591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:04:37.250604 | orchestrator | 2026-03-24 02:04:37.250613 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-03-24 02:04:37.250623 | orchestrator | Tuesday 24 March 2026 02:04:21 +0000 (0:00:00.832) 0:06:50.181 ********* 2026-03-24 02:04:37.250654 | orchestrator | ok: [testbed-manager] 2026-03-24 02:04:37.250664 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:04:37.250673 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:04:37.250683 | orchestrator | 
changed: [testbed-node-5] 2026-03-24 02:04:37.250692 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:04:37.250701 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:04:37.250711 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:04:37.250720 | orchestrator | 2026-03-24 02:04:37.250729 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-03-24 02:04:37.250739 | orchestrator | Tuesday 24 March 2026 02:04:22 +0000 (0:00:00.863) 0:06:51.044 ********* 2026-03-24 02:04:37.250748 | orchestrator | ok: [testbed-manager] 2026-03-24 02:04:37.250757 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:04:37.250766 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:04:37.250776 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:04:37.250785 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:04:37.250794 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:04:37.250803 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:04:37.250812 | orchestrator | 2026-03-24 02:04:37.250821 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-03-24 02:04:37.250831 | orchestrator | Tuesday 24 March 2026 02:04:23 +0000 (0:00:00.844) 0:06:51.888 ********* 2026-03-24 02:04:37.250840 | orchestrator | ok: [testbed-manager] 2026-03-24 02:04:37.250849 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:04:37.250858 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:04:37.250867 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:04:37.250876 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:04:37.250885 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:04:37.250894 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:04:37.250903 | orchestrator | 2026-03-24 02:04:37.250913 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-03-24 02:04:37.250944 | 
orchestrator | Tuesday 24 March 2026 02:04:25 +0000 (0:00:01.518) 0:06:53.406 ********* 2026-03-24 02:04:37.250953 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:04:37.250963 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:04:37.250972 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:04:37.250981 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:04:37.250990 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:04:37.251000 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:04:37.251009 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:04:37.251018 | orchestrator | 2026-03-24 02:04:37.251028 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-24 02:04:37.251037 | orchestrator | Tuesday 24 March 2026 02:04:26 +0000 (0:00:01.354) 0:06:54.761 ********* 2026-03-24 02:04:37.251046 | orchestrator | ok: [testbed-manager] 2026-03-24 02:04:37.251056 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:04:37.251065 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:04:37.251088 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:04:37.251098 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:04:37.251116 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:04:37.251125 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:04:37.251134 | orchestrator | 2026-03-24 02:04:37.251144 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-24 02:04:37.251153 | orchestrator | Tuesday 24 March 2026 02:04:27 +0000 (0:00:01.350) 0:06:56.111 ********* 2026-03-24 02:04:37.251162 | orchestrator | changed: [testbed-manager] 2026-03-24 02:04:37.251170 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:04:37.251179 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:04:37.251189 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:04:37.251198 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:04:37.251206 | 
orchestrator | changed: [testbed-node-1] 2026-03-24 02:04:37.251215 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:04:37.251224 | orchestrator | 2026-03-24 02:04:37.251240 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-24 02:04:37.251250 | orchestrator | Tuesday 24 March 2026 02:04:29 +0000 (0:00:01.536) 0:06:57.647 ********* 2026-03-24 02:04:37.251259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:04:37.251268 | orchestrator | 2026-03-24 02:04:37.251277 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-24 02:04:37.251287 | orchestrator | Tuesday 24 March 2026 02:04:30 +0000 (0:00:01.086) 0:06:58.734 ********* 2026-03-24 02:04:37.251296 | orchestrator | ok: [testbed-manager] 2026-03-24 02:04:37.251305 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:04:37.251314 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:04:37.251323 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:04:37.251332 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:04:37.251341 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:04:37.251350 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:04:37.251359 | orchestrator | 2026-03-24 02:04:37.251368 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-24 02:04:37.251377 | orchestrator | Tuesday 24 March 2026 02:04:31 +0000 (0:00:01.347) 0:07:00.081 ********* 2026-03-24 02:04:37.251461 | orchestrator | ok: [testbed-manager] 2026-03-24 02:04:37.251474 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:04:37.251483 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:04:37.251492 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:04:37.251501 | orchestrator | 
ok: [testbed-node-1] 2026-03-24 02:04:37.251523 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:04:37.251533 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:04:37.251542 | orchestrator | 2026-03-24 02:04:37.251552 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-24 02:04:37.251561 | orchestrator | Tuesday 24 March 2026 02:04:33 +0000 (0:00:01.919) 0:07:02.000 ********* 2026-03-24 02:04:37.251570 | orchestrator | ok: [testbed-manager] 2026-03-24 02:04:37.251579 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:04:37.251588 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:04:37.251597 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:04:37.251606 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:04:37.251616 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:04:37.251625 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:04:37.251634 | orchestrator | 2026-03-24 02:04:37.251643 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-24 02:04:37.251652 | orchestrator | Tuesday 24 March 2026 02:04:34 +0000 (0:00:01.096) 0:07:03.096 ********* 2026-03-24 02:04:37.251662 | orchestrator | ok: [testbed-manager] 2026-03-24 02:04:37.251671 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:04:37.251680 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:04:37.251689 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:04:37.251698 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:04:37.251707 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:04:37.251716 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:04:37.251725 | orchestrator | 2026-03-24 02:04:37.251735 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-24 02:04:37.251744 | orchestrator | Tuesday 24 March 2026 02:04:36 +0000 (0:00:01.325) 0:07:04.422 ********* 2026-03-24 02:04:37.251753 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:04:37.251763 | orchestrator | 2026-03-24 02:04:37.251772 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-24 02:04:37.251781 | orchestrator | Tuesday 24 March 2026 02:04:36 +0000 (0:00:00.846) 0:07:05.268 ********* 2026-03-24 02:04:37.251790 | orchestrator | 2026-03-24 02:04:37.251800 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-24 02:04:37.251815 | orchestrator | Tuesday 24 March 2026 02:04:36 +0000 (0:00:00.037) 0:07:05.306 ********* 2026-03-24 02:04:37.251825 | orchestrator | 2026-03-24 02:04:37.251834 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-24 02:04:37.251843 | orchestrator | Tuesday 24 March 2026 02:04:37 +0000 (0:00:00.036) 0:07:05.343 ********* 2026-03-24 02:04:37.251852 | orchestrator | 2026-03-24 02:04:37.251861 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-24 02:04:37.251878 | orchestrator | Tuesday 24 March 2026 02:04:37 +0000 (0:00:00.042) 0:07:05.385 ********* 2026-03-24 02:05:02.724447 | orchestrator | 2026-03-24 02:05:02.724566 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-24 02:05:02.724584 | orchestrator | Tuesday 24 March 2026 02:04:37 +0000 (0:00:00.039) 0:07:05.425 ********* 2026-03-24 02:05:02.724597 | orchestrator | 2026-03-24 02:05:02.724609 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-24 02:05:02.724621 | orchestrator | Tuesday 24 March 2026 02:04:37 +0000 (0:00:00.039) 0:07:05.464 ********* 2026-03-24 02:05:02.724633 | orchestrator | 2026-03-24 
02:05:02.724645 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-24 02:05:02.724657 | orchestrator | Tuesday 24 March 2026 02:04:37 +0000 (0:00:00.043) 0:07:05.508 ********* 2026-03-24 02:05:02.724668 | orchestrator | 2026-03-24 02:05:02.724680 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-24 02:05:02.724691 | orchestrator | Tuesday 24 March 2026 02:04:37 +0000 (0:00:00.037) 0:07:05.545 ********* 2026-03-24 02:05:02.724703 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:02.724716 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:02.724727 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:02.724739 | orchestrator | 2026-03-24 02:05:02.724750 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-24 02:05:02.724762 | orchestrator | Tuesday 24 March 2026 02:04:38 +0000 (0:00:01.211) 0:07:06.757 ********* 2026-03-24 02:05:02.724773 | orchestrator | changed: [testbed-manager] 2026-03-24 02:05:02.724785 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:05:02.724797 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:05:02.724808 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:05:02.724820 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:05:02.724831 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:05:02.724843 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:05:02.724854 | orchestrator | 2026-03-24 02:05:02.724866 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-24 02:05:02.724877 | orchestrator | Tuesday 24 March 2026 02:04:39 +0000 (0:00:01.412) 0:07:08.169 ********* 2026-03-24 02:05:02.724889 | orchestrator | changed: [testbed-manager] 2026-03-24 02:05:02.724900 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:05:02.724912 | orchestrator | changed: [testbed-node-4] 2026-03-24 
02:05:02.724923 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:05:02.724934 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:05:02.724946 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:05:02.724960 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:05:02.724974 | orchestrator | 2026-03-24 02:05:02.724987 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-24 02:05:02.725001 | orchestrator | Tuesday 24 March 2026 02:04:41 +0000 (0:00:01.158) 0:07:09.327 ********* 2026-03-24 02:05:02.725014 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:05:02.725028 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:05:02.725041 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:05:02.725055 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:05:02.725070 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:05:02.725085 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:05:02.725097 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:05:02.725108 | orchestrator | 2026-03-24 02:05:02.725120 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-24 02:05:02.725131 | orchestrator | Tuesday 24 March 2026 02:04:43 +0000 (0:00:02.402) 0:07:11.729 ********* 2026-03-24 02:05:02.725182 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:05:02.725195 | orchestrator | 2026-03-24 02:05:02.725207 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-24 02:05:02.725219 | orchestrator | Tuesday 24 March 2026 02:04:43 +0000 (0:00:00.109) 0:07:11.839 ********* 2026-03-24 02:05:02.725230 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:02.725241 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:05:02.725268 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:05:02.725280 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:05:02.725302 | 
orchestrator | changed: [testbed-node-0] 2026-03-24 02:05:02.725313 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:05:02.725324 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:05:02.725335 | orchestrator | 2026-03-24 02:05:02.725347 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-24 02:05:02.725359 | orchestrator | Tuesday 24 March 2026 02:04:44 +0000 (0:00:01.029) 0:07:12.868 ********* 2026-03-24 02:05:02.725371 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:05:02.725382 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:05:02.725393 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:05:02.725422 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:05:02.725434 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:05:02.725444 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:05:02.725456 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:05:02.725467 | orchestrator | 2026-03-24 02:05:02.725478 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-24 02:05:02.725490 | orchestrator | Tuesday 24 March 2026 02:04:45 +0000 (0:00:00.505) 0:07:13.374 ********* 2026-03-24 02:05:02.725502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:05:02.725516 | orchestrator | 2026-03-24 02:05:02.725528 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-24 02:05:02.725539 | orchestrator | Tuesday 24 March 2026 02:04:46 +0000 (0:00:00.973) 0:07:14.347 ********* 2026-03-24 02:05:02.725550 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:02.725562 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:02.725573 | orchestrator | ok: 
[testbed-node-4] 2026-03-24 02:05:02.725585 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:02.725596 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:02.725608 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:02.725620 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:02.725631 | orchestrator | 2026-03-24 02:05:02.725643 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-24 02:05:02.725654 | orchestrator | Tuesday 24 March 2026 02:04:46 +0000 (0:00:00.760) 0:07:15.108 ********* 2026-03-24 02:05:02.725666 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-24 02:05:02.725694 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-24 02:05:02.725708 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-24 02:05:02.725719 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-24 02:05:02.725731 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-24 02:05:02.725742 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-24 02:05:02.725754 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-24 02:05:02.725765 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-24 02:05:02.725777 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-24 02:05:02.725788 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-24 02:05:02.725800 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-24 02:05:02.725811 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-24 02:05:02.725832 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-24 02:05:02.725843 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-24 02:05:02.725855 | orchestrator | 2026-03-24 02:05:02.725866 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-03-24 02:05:02.725878 | orchestrator | Tuesday 24 March 2026 02:04:49 +0000 (0:00:02.279) 0:07:17.387 ********* 2026-03-24 02:05:02.725889 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:05:02.725901 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:05:02.725912 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:05:02.725923 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:05:02.725934 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:05:02.725945 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:05:02.725957 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:05:02.725968 | orchestrator | 2026-03-24 02:05:02.725980 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-24 02:05:02.725991 | orchestrator | Tuesday 24 March 2026 02:04:49 +0000 (0:00:00.636) 0:07:18.024 ********* 2026-03-24 02:05:02.726005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:05:02.726095 | orchestrator | 2026-03-24 02:05:02.726108 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-03-24 02:05:02.726119 | orchestrator | Tuesday 24 March 2026 02:04:50 +0000 (0:00:00.755) 0:07:18.780 ********* 2026-03-24 02:05:02.726130 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:02.726142 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:02.726161 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:02.726172 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:02.726184 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:02.726196 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:02.726207 | orchestrator | ok: 
[testbed-node-2] 2026-03-24 02:05:02.726218 | orchestrator | 2026-03-24 02:05:02.726230 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-24 02:05:02.726241 | orchestrator | Tuesday 24 March 2026 02:04:51 +0000 (0:00:00.841) 0:07:19.621 ********* 2026-03-24 02:05:02.726260 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:02.726272 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:02.726283 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:02.726294 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:02.726306 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:02.726317 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:02.726328 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:02.726340 | orchestrator | 2026-03-24 02:05:02.726351 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-24 02:05:02.726363 | orchestrator | Tuesday 24 March 2026 02:04:52 +0000 (0:00:00.957) 0:07:20.578 ********* 2026-03-24 02:05:02.726374 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:05:02.726385 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:05:02.726416 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:05:02.726429 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:05:02.726440 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:05:02.726452 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:05:02.726463 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:05:02.726474 | orchestrator | 2026-03-24 02:05:02.726486 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-24 02:05:02.726497 | orchestrator | Tuesday 24 March 2026 02:04:52 +0000 (0:00:00.472) 0:07:21.051 ********* 2026-03-24 02:05:02.726508 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:02.726520 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:02.726531 | 
orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:02.726543 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:02.726554 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:02.726574 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:02.726585 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:02.726596 | orchestrator | 2026-03-24 02:05:02.726608 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-24 02:05:02.726619 | orchestrator | Tuesday 24 March 2026 02:04:54 +0000 (0:00:01.502) 0:07:22.554 ********* 2026-03-24 02:05:02.726630 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:05:02.726642 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:05:02.726653 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:05:02.726665 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:05:02.726676 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:05:02.726687 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:05:02.726698 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:05:02.726710 | orchestrator | 2026-03-24 02:05:02.726721 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-24 02:05:02.726733 | orchestrator | Tuesday 24 March 2026 02:04:54 +0000 (0:00:00.476) 0:07:23.030 ********* 2026-03-24 02:05:02.726744 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:02.726756 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:05:02.726767 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:05:02.726778 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:05:02.726790 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:05:02.726801 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:05:02.726821 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:05:34.412836 | orchestrator | 2026-03-24 02:05:34.413010 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-03-24 02:05:34.413042 | orchestrator | Tuesday 24 March 2026 02:05:02 +0000 (0:00:07.995) 0:07:31.025 ********* 2026-03-24 02:05:34.413113 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.413129 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:05:34.413143 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:05:34.413155 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:05:34.413166 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:05:34.413178 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:05:34.413189 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:05:34.413201 | orchestrator | 2026-03-24 02:05:34.413213 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-24 02:05:34.413225 | orchestrator | Tuesday 24 March 2026 02:05:04 +0000 (0:00:01.492) 0:07:32.518 ********* 2026-03-24 02:05:34.413237 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.413248 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:05:34.413260 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:05:34.413275 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:05:34.413294 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:05:34.413313 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:05:34.413340 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:05:34.413363 | orchestrator | 2026-03-24 02:05:34.413382 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-24 02:05:34.413401 | orchestrator | Tuesday 24 March 2026 02:05:05 +0000 (0:00:01.700) 0:07:34.218 ********* 2026-03-24 02:05:34.413449 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.413469 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:05:34.413488 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:05:34.413507 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:05:34.413524 | 
orchestrator | changed: [testbed-node-0] 2026-03-24 02:05:34.413543 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:05:34.413561 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:05:34.413581 | orchestrator | 2026-03-24 02:05:34.413601 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-24 02:05:34.413620 | orchestrator | Tuesday 24 March 2026 02:05:07 +0000 (0:00:01.639) 0:07:35.857 ********* 2026-03-24 02:05:34.413640 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.413657 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:34.413673 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:34.413723 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:34.413741 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:34.413758 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:34.413774 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:34.413791 | orchestrator | 2026-03-24 02:05:34.413809 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-24 02:05:34.413822 | orchestrator | Tuesday 24 March 2026 02:05:08 +0000 (0:00:00.831) 0:07:36.689 ********* 2026-03-24 02:05:34.413833 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:05:34.413845 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:05:34.413856 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:05:34.413868 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:05:34.413879 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:05:34.413890 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:05:34.413902 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:05:34.413913 | orchestrator | 2026-03-24 02:05:34.413925 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-24 02:05:34.413937 | orchestrator | Tuesday 24 March 2026 02:05:09 +0000 (0:00:00.904) 0:07:37.594 ********* 
2026-03-24 02:05:34.413949 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:05:34.413960 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:05:34.413971 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:05:34.413982 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:05:34.413994 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:05:34.414005 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:05:34.414063 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:05:34.414079 | orchestrator | 2026-03-24 02:05:34.414099 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-24 02:05:34.414117 | orchestrator | Tuesday 24 March 2026 02:05:09 +0000 (0:00:00.473) 0:07:38.067 ********* 2026-03-24 02:05:34.414136 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.414175 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:34.414193 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:34.414211 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:34.414228 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:34.414246 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:34.414263 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:34.414279 | orchestrator | 2026-03-24 02:05:34.414298 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-03-24 02:05:34.414316 | orchestrator | Tuesday 24 March 2026 02:05:10 +0000 (0:00:00.466) 0:07:38.533 ********* 2026-03-24 02:05:34.414334 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.414351 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:34.414369 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:34.414389 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:34.414439 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:34.414459 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:34.414478 | orchestrator | ok: [testbed-node-2] 2026-03-24 
02:05:34.414497 | orchestrator | 2026-03-24 02:05:34.414515 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-24 02:05:34.414534 | orchestrator | Tuesday 24 March 2026 02:05:10 +0000 (0:00:00.484) 0:07:39.018 ********* 2026-03-24 02:05:34.414554 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.414575 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:34.414593 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:34.414611 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:34.414623 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:34.414634 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:34.414646 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:34.414657 | orchestrator | 2026-03-24 02:05:34.414668 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-24 02:05:34.414680 | orchestrator | Tuesday 24 March 2026 02:05:11 +0000 (0:00:00.637) 0:07:39.655 ********* 2026-03-24 02:05:34.414691 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.414702 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:34.414729 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:34.414740 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:34.414752 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:34.414763 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:34.414774 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:34.414808 | orchestrator | 2026-03-24 02:05:34.414859 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-24 02:05:34.414872 | orchestrator | Tuesday 24 March 2026 02:05:16 +0000 (0:00:05.314) 0:07:44.970 ********* 2026-03-24 02:05:34.414884 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:05:34.414895 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:05:34.414906 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:05:34.414917 
| orchestrator | skipping: [testbed-node-5] 2026-03-24 02:05:34.414928 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:05:34.414940 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:05:34.414951 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:05:34.414962 | orchestrator | 2026-03-24 02:05:34.414973 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-24 02:05:34.414985 | orchestrator | Tuesday 24 March 2026 02:05:17 +0000 (0:00:00.497) 0:07:45.467 ********* 2026-03-24 02:05:34.414998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:05:34.415013 | orchestrator | 2026-03-24 02:05:34.415024 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-24 02:05:34.415035 | orchestrator | Tuesday 24 March 2026 02:05:18 +0000 (0:00:00.949) 0:07:46.417 ********* 2026-03-24 02:05:34.415046 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.415058 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:34.415069 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:34.415080 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:34.415091 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:34.415102 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:34.415113 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:34.415125 | orchestrator | 2026-03-24 02:05:34.415136 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-24 02:05:34.415147 | orchestrator | Tuesday 24 March 2026 02:05:20 +0000 (0:00:01.902) 0:07:48.319 ********* 2026-03-24 02:05:34.415158 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.415169 | orchestrator | ok: [testbed-node-3] 2026-03-24 
02:05:34.415181 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:34.415192 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:34.415203 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:34.415214 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:34.415225 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:34.415236 | orchestrator | 2026-03-24 02:05:34.415247 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-24 02:05:34.415258 | orchestrator | Tuesday 24 March 2026 02:05:21 +0000 (0:00:01.147) 0:07:49.467 ********* 2026-03-24 02:05:34.415269 | orchestrator | ok: [testbed-manager] 2026-03-24 02:05:34.415281 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:05:34.415292 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:05:34.415303 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:05:34.415314 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:05:34.415325 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:05:34.415338 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:05:34.415357 | orchestrator | 2026-03-24 02:05:34.415376 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-24 02:05:34.415394 | orchestrator | Tuesday 24 March 2026 02:05:21 +0000 (0:00:00.851) 0:07:50.318 ********* 2026-03-24 02:05:34.415489 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-24 02:05:34.415513 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-24 02:05:34.415547 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-24 02:05:34.415566 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-24 02:05:34.415585 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-24 02:05:34.415605 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-24 02:05:34.415625 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-24 02:05:34.415644 | orchestrator | 2026-03-24 02:05:34.415664 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-24 02:05:34.415682 | orchestrator | Tuesday 24 March 2026 02:05:23 +0000 (0:00:01.825) 0:07:52.144 ********* 2026-03-24 02:05:34.415694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:05:34.415706 | orchestrator | 2026-03-24 02:05:34.415717 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-24 02:05:34.415729 | orchestrator | Tuesday 24 March 2026 02:05:24 +0000 (0:00:00.780) 0:07:52.924 ********* 2026-03-24 02:05:34.415740 | orchestrator | changed: [testbed-manager] 2026-03-24 02:05:34.415751 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:05:34.415763 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:05:34.415774 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:05:34.415786 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:05:34.415797 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:05:34.415808 | orchestrator | changed: 
[testbed-node-0] 2026-03-24 02:05:34.415820 | orchestrator | 2026-03-24 02:05:34.415843 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-24 02:06:06.119783 | orchestrator | Tuesday 24 March 2026 02:05:34 +0000 (0:00:09.791) 0:08:02.716 ********* 2026-03-24 02:06:06.119873 | orchestrator | ok: [testbed-manager] 2026-03-24 02:06:06.119883 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:06:06.119890 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:06:06.119899 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:06:06.119909 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:06:06.119918 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:06:06.119928 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:06:06.119937 | orchestrator | 2026-03-24 02:06:06.119948 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-24 02:06:06.119957 | orchestrator | Tuesday 24 March 2026 02:05:36 +0000 (0:00:01.898) 0:08:04.615 ********* 2026-03-24 02:06:06.119967 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:06:06.119977 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:06:06.119987 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:06:06.119996 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:06:06.120006 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:06:06.120017 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:06:06.120031 | orchestrator | 2026-03-24 02:06:06.120041 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-24 02:06:06.120047 | orchestrator | Tuesday 24 March 2026 02:05:37 +0000 (0:00:01.320) 0:08:05.936 ********* 2026-03-24 02:06:06.120053 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:06.120060 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:06.120066 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:06.120072 | orchestrator | changed: 
[testbed-node-4] 2026-03-24 02:06:06.120078 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:06.120100 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:06.120106 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:06.120112 | orchestrator | 2026-03-24 02:06:06.120118 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-24 02:06:06.120124 | orchestrator | 2026-03-24 02:06:06.120130 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-24 02:06:06.120135 | orchestrator | Tuesday 24 March 2026 02:05:38 +0000 (0:00:01.249) 0:08:07.185 ********* 2026-03-24 02:06:06.120141 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:06:06.120147 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:06:06.120152 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:06:06.120158 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:06:06.120164 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:06:06.120170 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:06:06.120175 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:06:06.120181 | orchestrator | 2026-03-24 02:06:06.120187 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-24 02:06:06.120192 | orchestrator | 2026-03-24 02:06:06.120199 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-24 02:06:06.120204 | orchestrator | Tuesday 24 March 2026 02:05:39 +0000 (0:00:00.642) 0:08:07.827 ********* 2026-03-24 02:06:06.120210 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:06.120216 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:06.120223 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:06:06.120232 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:06.120241 | orchestrator | changed: [testbed-node-1] 2026-03-24 
02:06:06.120251 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:06.120260 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:06.120268 | orchestrator | 2026-03-24 02:06:06.120276 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-24 02:06:06.120299 | orchestrator | Tuesday 24 March 2026 02:05:40 +0000 (0:00:01.475) 0:08:09.303 ********* 2026-03-24 02:06:06.120310 | orchestrator | ok: [testbed-manager] 2026-03-24 02:06:06.120319 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:06:06.120328 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:06:06.120337 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:06:06.120344 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:06:06.120350 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:06:06.120357 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:06:06.120364 | orchestrator | 2026-03-24 02:06:06.120370 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-24 02:06:06.120377 | orchestrator | Tuesday 24 March 2026 02:05:42 +0000 (0:00:01.432) 0:08:10.736 ********* 2026-03-24 02:06:06.120383 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:06:06.120390 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:06:06.120397 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:06:06.120403 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:06:06.120409 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:06:06.120445 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:06:06.120452 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:06:06.120459 | orchestrator | 2026-03-24 02:06:06.120465 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-24 02:06:06.120472 | orchestrator | Tuesday 24 March 2026 02:05:42 +0000 (0:00:00.449) 0:08:11.185 ********* 2026-03-24 02:06:06.120480 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:06:06.120488 | orchestrator | 2026-03-24 02:06:06.120495 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-24 02:06:06.120502 | orchestrator | Tuesday 24 March 2026 02:05:43 +0000 (0:00:00.920) 0:08:12.106 ********* 2026-03-24 02:06:06.120510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:06:06.120525 | orchestrator | 2026-03-24 02:06:06.120532 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-24 02:06:06.120540 | orchestrator | Tuesday 24 March 2026 02:05:44 +0000 (0:00:00.766) 0:08:12.873 ********* 2026-03-24 02:06:06.120549 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:06.120558 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:06.120572 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:06.120582 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:06:06.120591 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:06.120599 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:06.120607 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:06.120616 | orchestrator | 2026-03-24 02:06:06.120641 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-24 02:06:06.120650 | orchestrator | Tuesday 24 March 2026 02:05:54 +0000 (0:00:09.673) 0:08:22.546 ********* 2026-03-24 02:06:06.120658 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:06.120666 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:06.120675 | orchestrator | changed: [testbed-node-4] 2026-03-24 
02:06:06.120684 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:06.120693 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:06.120702 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:06.120712 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:06.120721 | orchestrator | 2026-03-24 02:06:06.120730 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-24 02:06:06.120738 | orchestrator | Tuesday 24 March 2026 02:05:55 +0000 (0:00:01.024) 0:08:23.570 ********* 2026-03-24 02:06:06.120744 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:06.120750 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:06.120755 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:06:06.120761 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:06.120766 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:06.120772 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:06.120778 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:06.120783 | orchestrator | 2026-03-24 02:06:06.120791 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-24 02:06:06.120800 | orchestrator | Tuesday 24 March 2026 02:05:56 +0000 (0:00:01.363) 0:08:24.933 ********* 2026-03-24 02:06:06.120809 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:06.120818 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:06:06.120826 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:06.120834 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:06.120843 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:06.120852 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:06.120861 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:06.120871 | orchestrator | 2026-03-24 02:06:06.120880 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-24 02:06:06.120890 | orchestrator | Tuesday 24 March 2026 02:05:59 +0000 (0:00:02.387) 0:08:27.321 ********* 2026-03-24 02:06:06.120899 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:06.120908 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:06:06.120918 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:06.120928 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:06.120937 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:06.120946 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:06.120955 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:06.120963 | orchestrator | 2026-03-24 02:06:06.120971 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-24 02:06:06.120978 | orchestrator | Tuesday 24 March 2026 02:06:00 +0000 (0:00:01.273) 0:08:28.594 ********* 2026-03-24 02:06:06.120986 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:06.120994 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:06.121011 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:06.121021 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:06:06.121030 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:06.121038 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:06.121046 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:06.121052 | orchestrator | 2026-03-24 02:06:06.121058 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-24 02:06:06.121064 | orchestrator | 2026-03-24 02:06:06.121079 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-24 02:06:06.121088 | orchestrator | Tuesday 24 March 2026 02:06:01 +0000 (0:00:01.146) 0:08:29.741 ********* 2026-03-24 02:06:06.121098 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-24 02:06:06.121108 | orchestrator | 2026-03-24 02:06:06.121117 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-24 02:06:06.121126 | orchestrator | Tuesday 24 March 2026 02:06:02 +0000 (0:00:00.754) 0:08:30.496 ********* 2026-03-24 02:06:06.121136 | orchestrator | ok: [testbed-manager] 2026-03-24 02:06:06.121146 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:06:06.121155 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:06:06.121165 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:06:06.121173 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:06:06.121184 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:06:06.121190 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:06:06.121196 | orchestrator | 2026-03-24 02:06:06.121202 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-24 02:06:06.121207 | orchestrator | Tuesday 24 March 2026 02:06:03 +0000 (0:00:01.022) 0:08:31.518 ********* 2026-03-24 02:06:06.121213 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:06.121219 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:06:06.121225 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:06.121235 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:06.121245 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:06.121258 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:06.121267 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:06.121276 | orchestrator | 2026-03-24 02:06:06.121284 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-24 02:06:06.121293 | orchestrator | Tuesday 24 March 2026 02:06:04 +0000 (0:00:01.137) 0:08:32.655 ********* 2026-03-24 02:06:06.121303 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-24 02:06:06.121312 | orchestrator | 2026-03-24 02:06:06.121320 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-24 02:06:06.121330 | orchestrator | Tuesday 24 March 2026 02:06:05 +0000 (0:00:00.935) 0:08:33.591 ********* 2026-03-24 02:06:06.121336 | orchestrator | ok: [testbed-manager] 2026-03-24 02:06:06.121342 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:06:06.121347 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:06:06.121353 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:06:06.121358 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:06:06.121364 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:06:06.121370 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:06:06.121375 | orchestrator | 2026-03-24 02:06:06.121394 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-24 02:06:07.673291 | orchestrator | Tuesday 24 March 2026 02:06:06 +0000 (0:00:00.826) 0:08:34.418 ********* 2026-03-24 02:06:07.673366 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:07.673374 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:07.673379 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:06:07.673383 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:07.673387 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:07.673392 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:07.673396 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:07.673435 | orchestrator | 2026-03-24 02:06:07.673441 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:06:07.673446 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-24 02:06:07.673452 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-24 02:06:07.673456 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-24 02:06:07.673460 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-24 02:06:07.673464 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-24 02:06:07.673468 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-24 02:06:07.673472 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-24 02:06:07.673476 | orchestrator | 2026-03-24 02:06:07.673480 | orchestrator | 2026-03-24 02:06:07.673484 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:06:07.673488 | orchestrator | Tuesday 24 March 2026 02:06:07 +0000 (0:00:01.162) 0:08:35.580 ********* 2026-03-24 02:06:07.673492 | orchestrator | =============================================================================== 2026-03-24 02:06:07.673496 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.98s 2026-03-24 02:06:07.673500 | orchestrator | osism.commons.packages : Download required packages -------------------- 54.98s 2026-03-24 02:06:07.673504 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 36.20s 2026-03-24 02:06:07.673508 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.23s 2026-03-24 02:06:07.673512 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.50s 2026-03-24 02:06:07.673526 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.11s 2026-03-24 02:06:07.673531 | orchestrator | osism.services.docker : Install containerd package --------------------- 
10.42s 2026-03-24 02:06:07.673535 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.33s 2026-03-24 02:06:07.673539 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.79s 2026-03-24 02:06:07.673543 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.75s 2026-03-24 02:06:07.673547 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.69s 2026-03-24 02:06:07.673551 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.67s 2026-03-24 02:06:07.673555 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.28s 2026-03-24 02:06:07.673559 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.98s 2026-03-24 02:06:07.673563 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.86s 2026-03-24 02:06:07.673567 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.00s 2026-03-24 02:06:07.673571 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.41s 2026-03-24 02:06:07.673575 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.95s 2026-03-24 02:06:07.673579 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.69s 2026-03-24 02:06:07.673583 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.64s 2026-03-24 02:06:07.923966 | orchestrator | + osism apply fail2ban 2026-03-24 02:06:20.354935 | orchestrator | 2026-03-24 02:06:20 | INFO  | Task 2b1393a5-bb91-4ed1-a4cf-0d1020c427fe (fail2ban) was prepared for execution. 
2026-03-24 02:06:20.355050 | orchestrator | 2026-03-24 02:06:20 | INFO  | It takes a moment until task 2b1393a5-bb91-4ed1-a4cf-0d1020c427fe (fail2ban) has been started and output is visible here. 2026-03-24 02:06:42.106576 | orchestrator | 2026-03-24 02:06:42.106701 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-24 02:06:42.106726 | orchestrator | 2026-03-24 02:06:42.106745 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-24 02:06:42.106763 | orchestrator | Tuesday 24 March 2026 02:06:24 +0000 (0:00:00.276) 0:00:00.276 ********* 2026-03-24 02:06:42.106782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:06:42.106802 | orchestrator | 2026-03-24 02:06:42.106820 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-24 02:06:42.106839 | orchestrator | Tuesday 24 March 2026 02:06:25 +0000 (0:00:01.167) 0:00:01.443 ********* 2026-03-24 02:06:42.106859 | orchestrator | changed: [testbed-manager] 2026-03-24 02:06:42.106880 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:06:42.106900 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:06:42.106920 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:06:42.106964 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:06:42.106997 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:06:42.107016 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:06:42.107038 | orchestrator | 2026-03-24 02:06:42.107059 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-24 02:06:42.107079 | orchestrator | Tuesday 24 March 2026 02:06:37 +0000 (0:00:11.416) 0:00:12.859 ********* 
2026-03-24 02:06:42.107096 | orchestrator | changed: [testbed-manager]
2026-03-24 02:06:42.107109 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:06:42.107122 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:06:42.107136 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:06:42.107149 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:06:42.107162 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:06:42.107175 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:06:42.107187 | orchestrator |
2026-03-24 02:06:42.107201 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-24 02:06:42.107214 | orchestrator | Tuesday 24 March 2026 02:06:38 +0000 (0:00:01.407) 0:00:14.267 *********
2026-03-24 02:06:42.107227 | orchestrator | ok: [testbed-manager]
2026-03-24 02:06:42.107241 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:06:42.107253 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:06:42.107266 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:06:42.107280 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:06:42.107293 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:06:42.107304 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:06:42.107315 | orchestrator |
2026-03-24 02:06:42.107327 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-24 02:06:42.107338 | orchestrator | Tuesday 24 March 2026 02:06:40 +0000 (0:00:01.471) 0:00:15.739 *********
2026-03-24 02:06:42.107350 | orchestrator | changed: [testbed-manager]
2026-03-24 02:06:42.107361 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:06:42.107373 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:06:42.107384 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:06:42.107396 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:06:42.107407 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:06:42.107450 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:06:42.107463 | orchestrator |
2026-03-24 02:06:42.107474 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:06:42.107486 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:06:42.107528 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:06:42.107541 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:06:42.107553 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:06:42.107565 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:06:42.107576 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:06:42.107588 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:06:42.107599 | orchestrator |
2026-03-24 02:06:42.107611 | orchestrator |
2026-03-24 02:06:42.107622 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:06:42.107634 | orchestrator | Tuesday 24 March 2026 02:06:41 +0000 (0:00:01.617) 0:00:17.356 *********
2026-03-24 02:06:42.107645 | orchestrator | ===============================================================================
2026-03-24 02:06:42.107657 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.42s
2026-03-24 02:06:42.107668 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.62s
2026-03-24 02:06:42.107679 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.47s
2026-03-24 02:06:42.107690 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.41s
2026-03-24 02:06:42.107701 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.17s
2026-03-24 02:06:42.365334 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-24 02:06:42.365516 | orchestrator | + osism apply network
2026-03-24 02:06:54.339279 | orchestrator | 2026-03-24 02:06:54 | INFO  | Task b8b6dd8c-6454-49aa-89a2-b0d4357f6aa2 (network) was prepared for execution.
2026-03-24 02:06:54.339369 | orchestrator | 2026-03-24 02:06:54 | INFO  | It takes a moment until task b8b6dd8c-6454-49aa-89a2-b0d4357f6aa2 (network) has been started and output is visible here.
2026-03-24 02:07:22.084638 | orchestrator |
2026-03-24 02:07:22.084750 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-24 02:07:22.084766 | orchestrator |
2026-03-24 02:07:22.084775 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-24 02:07:22.084784 | orchestrator | Tuesday 24 March 2026 02:06:58 +0000 (0:00:00.186) 0:00:00.186 *********
2026-03-24 02:07:22.084793 | orchestrator | ok: [testbed-manager]
2026-03-24 02:07:22.084802 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:07:22.084810 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:07:22.084818 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:07:22.084826 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:07:22.084835 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:07:22.084843 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:07:22.084851 | orchestrator |
2026-03-24 02:07:22.084859 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-24 02:07:22.084868 | orchestrator | Tuesday 24 March 2026 02:06:58 +0000 (0:00:00.525) 0:00:00.712 *********
2026-03-24 02:07:22.084878 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:07:22.084888 | orchestrator |
2026-03-24 02:07:22.084896 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-24 02:07:22.084927 | orchestrator | Tuesday 24 March 2026 02:06:59 +0000 (0:00:01.107) 0:00:01.819 *********
2026-03-24 02:07:22.084935 | orchestrator | ok: [testbed-manager]
2026-03-24 02:07:22.084943 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:07:22.084951 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:07:22.084959 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:07:22.084966 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:07:22.084974 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:07:22.084981 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:07:22.084989 | orchestrator |
2026-03-24 02:07:22.084997 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-24 02:07:22.085005 | orchestrator | Tuesday 24 March 2026 02:07:02 +0000 (0:00:02.253) 0:00:04.072 *********
2026-03-24 02:07:22.085013 | orchestrator | ok: [testbed-manager]
2026-03-24 02:07:22.085021 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:07:22.085029 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:07:22.085037 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:07:22.085044 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:07:22.085052 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:07:22.085060 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:07:22.085067 | orchestrator |
2026-03-24 02:07:22.085075 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-24 02:07:22.085083 | orchestrator | Tuesday 24 March 2026 02:07:03 +0000 (0:00:01.702) 0:00:05.775 *********
2026-03-24 02:07:22.085091 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-24 02:07:22.085100 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-24 02:07:22.085107 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-24 02:07:22.085115 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-24 02:07:22.085122 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-24 02:07:22.085130 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-24 02:07:22.085138 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-24 02:07:22.085145 | orchestrator |
2026-03-24 02:07:22.085168 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-24 02:07:22.085181 | orchestrator | Tuesday 24 March 2026 02:07:04 +0000 (0:00:00.996) 0:00:06.771 *********
2026-03-24 02:07:22.085189 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-24 02:07:22.085199 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-24 02:07:22.085207 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-24 02:07:22.085216 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-24 02:07:22.085225 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-24 02:07:22.085234 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-24 02:07:22.085243 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-24 02:07:22.085253 | orchestrator |
2026-03-24 02:07:22.085262 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-24 02:07:22.085270 | orchestrator | Tuesday 24 March 2026 02:07:08 +0000 (0:00:03.498) 0:00:10.270 *********
2026-03-24 02:07:22.085278 | orchestrator | changed: [testbed-manager]
2026-03-24 02:07:22.085287 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:07:22.085294 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:07:22.085304 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:07:22.085313 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:07:22.085322 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:07:22.085330 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:07:22.085337 | orchestrator |
2026-03-24 02:07:22.085345 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-24 02:07:22.085353 | orchestrator | Tuesday 24 March 2026 02:07:09 +0000 (0:00:01.536) 0:00:11.806 *********
2026-03-24 02:07:22.085360 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-24 02:07:22.085368 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-24 02:07:22.085376 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-24 02:07:22.085384 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-24 02:07:22.085399 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-24 02:07:22.085463 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-24 02:07:22.085473 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-24 02:07:22.085481 | orchestrator |
2026-03-24 02:07:22.085490 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-24 02:07:22.085499 | orchestrator | Tuesday 24 March 2026 02:07:11 +0000 (0:00:01.640) 0:00:13.447 *********
2026-03-24 02:07:22.085508 | orchestrator | ok: [testbed-manager]
2026-03-24 02:07:22.085516 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:07:22.085524 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:07:22.085532 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:07:22.085541 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:07:22.085549 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:07:22.085557 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:07:22.085564 | orchestrator |
2026-03-24 02:07:22.085573 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-24 02:07:22.085599 | orchestrator | Tuesday 24 March 2026 02:07:12 +0000 (0:00:00.987) 0:00:14.435 *********
2026-03-24 02:07:22.085610 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:07:22.085619 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:07:22.085627 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:07:22.085634 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:07:22.085641 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:07:22.085648 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:07:22.085655 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:07:22.085662 | orchestrator |
2026-03-24 02:07:22.085670 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-24 02:07:22.085677 | orchestrator | Tuesday 24 March 2026 02:07:13 +0000 (0:00:00.571) 0:00:15.007 *********
2026-03-24 02:07:22.085684 | orchestrator | ok: [testbed-manager]
2026-03-24 02:07:22.085692 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:07:22.085700 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:07:22.085707 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:07:22.085715 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:07:22.085723 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:07:22.085730 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:07:22.085738 | orchestrator |
2026-03-24 02:07:22.085745 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-24 02:07:22.085752 | orchestrator | Tuesday 24 March 2026 02:07:15 +0000 (0:00:02.228) 0:00:17.235 *********
2026-03-24 02:07:22.085760 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:07:22.085767 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:07:22.085774 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:07:22.085782 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:07:22.085789 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:07:22.085797 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:07:22.085806 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-24 02:07:22.085815 | orchestrator |
2026-03-24 02:07:22.085823 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-24 02:07:22.085831 | orchestrator | Tuesday 24 March 2026 02:07:16 +0000 (0:00:00.826) 0:00:18.062 *********
2026-03-24 02:07:22.085840 | orchestrator | ok: [testbed-manager]
2026-03-24 02:07:22.085848 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:07:22.085856 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:07:22.085865 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:07:22.085872 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:07:22.085880 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:07:22.085888 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:07:22.085896 | orchestrator |
2026-03-24 02:07:22.085904 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-24 02:07:22.085912 | orchestrator | Tuesday 24 March 2026 02:07:17 +0000 (0:00:01.667) 0:00:19.730 *********
2026-03-24 02:07:22.085921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:07:22.085942 | orchestrator |
2026-03-24 02:07:22.085951 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-24 02:07:22.085959 | orchestrator | Tuesday 24 March 2026 02:07:19 +0000 (0:00:01.220) 0:00:20.950 *********
2026-03-24 02:07:22.085966 | orchestrator | ok: [testbed-manager]
2026-03-24 02:07:22.085974 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:07:22.085982 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:07:22.085990 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:07:22.086005 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:07:22.086068 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:07:22.086078 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:07:22.086086 | orchestrator |
2026-03-24 02:07:22.086094 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-24 02:07:22.086102 | orchestrator | Tuesday 24 March 2026 02:07:20 +0000 (0:00:01.135) 0:00:22.085 *********
2026-03-24 02:07:22.086110 | orchestrator | ok: [testbed-manager]
2026-03-24 02:07:22.086119 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:07:22.086127 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:07:22.086135 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:07:22.086143 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:07:22.086151 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:07:22.086159 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:07:22.086167 | orchestrator |
2026-03-24 02:07:22.086175 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-24 02:07:22.086183 | orchestrator | Tuesday 24 March 2026 02:07:20 +0000 (0:00:00.645) 0:00:22.730 *********
2026-03-24 02:07:22.086192 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-24 02:07:22.086200 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-24 02:07:22.086208 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-24 02:07:22.086216 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-24 02:07:22.086225 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-24 02:07:22.086233 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-24 02:07:22.086242 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-24 02:07:22.086251 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-24 02:07:22.086260 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-24 02:07:22.086268 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-24 02:07:22.086277 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-24 02:07:22.086286 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-24 02:07:22.086293 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-24 02:07:22.086302 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-24 02:07:22.086310 | orchestrator |
2026-03-24 02:07:22.086330 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-24 02:07:36.984270 | orchestrator | Tuesday 24 March 2026 02:07:22 +0000 (0:00:01.240) 0:00:23.971 *********
2026-03-24 02:07:36.984374 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:07:36.984388 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:07:36.984398 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:07:36.984470 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:07:36.984480 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:07:36.984490 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:07:36.984499 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:07:36.984509 | orchestrator |
2026-03-24 02:07:36.984540 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-24 02:07:36.984554 | orchestrator | Tuesday 24 March 2026 02:07:22 +0000 (0:00:00.623) 0:00:24.595 *********
2026-03-24 02:07:36.984572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-5, testbed-node-2, testbed-node-4, testbed-node-3
2026-03-24 02:07:36.984589 | orchestrator |
2026-03-24 02:07:36.984604 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-24 02:07:36.984619 | orchestrator | Tuesday 24 March 2026 02:07:26 +0000 (0:00:04.117) 0:00:28.713 *********
2026-03-24 02:07:36.984635 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.984652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.984666 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.984680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.984696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.984727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.984744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.984760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.984776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.984799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.984816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.984855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.984884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.984900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.984916 | orchestrator |
2026-03-24 02:07:36.984930 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-24 02:07:36.984944 | orchestrator | Tuesday 24 March 2026 02:07:31 +0000 (0:00:04.935) 0:00:33.648 *********
2026-03-24 02:07:36.984959 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.984974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.984989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.985005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.985020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.985042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.985059 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.985075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-24 02:07:36.985092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.985108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.985124 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.985145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:36.985164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:43.709102 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-24 02:07:43.709271 | orchestrator |
2026-03-24 02:07:43.709297 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-24 02:07:43.709318 | orchestrator | Tuesday 24 March 2026 02:07:36 +0000 (0:00:05.223) 0:00:38.871 *********
2026-03-24 02:07:43.709339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:07:43.709358 | orchestrator |
2026-03-24 02:07:43.709375 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-24 02:07:43.709393 | orchestrator | Tuesday 24 March 2026 02:07:38 +0000 (0:00:01.274) 0:00:40.146 *********
2026-03-24 02:07:43.709440 | orchestrator | ok: [testbed-manager]
2026-03-24 02:07:43.709458 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:07:43.709475 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:07:43.709492 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:07:43.709510 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:07:43.709528 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:07:43.709543 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:07:43.709554 | orchestrator |
2026-03-24 02:07:43.709565 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-24 02:07:43.709576 | orchestrator | Tuesday 24 March 2026 02:07:40 +0000 (0:00:01.881) 0:00:42.027 *********
2026-03-24 02:07:43.709587 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-24 02:07:43.709599 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-24 02:07:43.709609 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-24 02:07:43.709620 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-24 02:07:43.709630 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:07:43.709642 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-24 02:07:43.709652 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-24 02:07:43.709663 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-24 02:07:43.709674 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-24 02:07:43.709684 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:07:43.709694 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-24 02:07:43.709725 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-24 02:07:43.709736 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-24 02:07:43.709746 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-24 02:07:43.709785 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:07:43.709795 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-24 02:07:43.709806 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-24 02:07:43.709816 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-24 02:07:43.709826 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-24 02:07:43.709836 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:07:43.709847 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-24 02:07:43.709857 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-24 02:07:43.709867 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-24 02:07:43.709878 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-24 02:07:43.709888 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:07:43.709898 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-24 02:07:43.709908 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-24 02:07:43.709918 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-24 02:07:43.709927 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-24 02:07:43.709938 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:07:43.709948 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-24 02:07:43.709958 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-24 02:07:43.709968 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-24 02:07:43.709978 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-24 02:07:43.709988 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:07:43.709998 | orchestrator |
2026-03-24 02:07:43.710008 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-24 02:07:43.710116 | orchestrator | Tuesday 24 March 2026 02:07:42 +0000 (0:00:01.936) 0:00:43.963 *********
2026-03-24 02:07:43.710128 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:07:43.710149 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:07:43.710159 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:07:43.710169 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:07:43.710179 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:07:43.710190 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:07:43.710200 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:07:43.710210 | orchestrator |
2026-03-24 02:07:43.710220 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-24 02:07:43.710231 | orchestrator | Tuesday 24 March 2026 02:07:42 +0000 (0:00:00.621) 0:00:44.585 *********
2026-03-24 02:07:43.710241 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:07:43.710251 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:07:43.710261 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:07:43.710272 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:07:43.710282 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:07:43.710292 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:07:43.710302 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:07:43.710313 | orchestrator |
2026-03-24 02:07:43.710323 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:07:43.710335 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 02:07:43.710348 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 02:07:43.710369 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 02:07:43.710380 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 02:07:43.710390 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 02:07:43.710419 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 02:07:43.710430 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 02:07:43.710453 | orchestrator |
2026-03-24 02:07:43.710473 | orchestrator |
2026-03-24 02:07:43.710483 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:07:43.710494 | orchestrator | Tuesday 24 March 2026 02:07:43 +0000 (0:00:00.662) 0:00:45.248 *********
2026-03-24 02:07:43.710511 | orchestrator | ===============================================================================
2026-03-24 02:07:43.710521 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.22s
2026-03-24 02:07:43.710531 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.94s
2026-03-24 02:07:43.710542 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.12s
2026-03-24 02:07:43.710552 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.50s
2026-03-24 02:07:43.710562 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.25s
2026-03-24 02:07:43.710572 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.23s
2026-03-24 02:07:43.710582 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.94s
2026-03-24 02:07:43.710592 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.88s
2026-03-24 02:07:43.710603 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.70s
2026-03-24 02:07:43.710613 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s
2026-03-24 02:07:43.710623 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.64s
2026-03-24 02:07:43.710633 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.54s
2026-03-24 02:07:43.710643 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.27s
2026-03-24 02:07:43.710653 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.24s
2026-03-24 02:07:43.710663 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s
2026-03-24 02:07:43.710673 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.14s
2026-03-24 02:07:43.710684 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.11s
2026-03-24 02:07:43.710694 | orchestrator | osism.commons.network : Create required directories --------------------- 1.00s
2026-03-24 02:07:43.710704 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.99s
2026-03-24 02:07:43.710714 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.83s
2026-03-24 02:07:43.992464 | orchestrator | + osism apply wireguard
2026-03-24 02:07:56.003972 | orchestrator | 2026-03-24 02:07:55 | INFO  | Task 0a2ad016-d8c5-4c2e-a617-75326656ab36 (wireguard) was prepared for execution.
2026-03-24 02:07:56.004059 | orchestrator | 2026-03-24 02:07:55 | INFO  | It takes a moment until task 0a2ad016-d8c5-4c2e-a617-75326656ab36 (wireguard) has been started and output is visible here.
2026-03-24 02:08:15.035471 | orchestrator |
2026-03-24 02:08:15.035618 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-24 02:08:15.035637 | orchestrator |
2026-03-24 02:08:15.035649 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-24 02:08:15.035661 | orchestrator | Tuesday 24 March 2026 02:08:00 +0000 (0:00:00.216) 0:00:00.216 *********
2026-03-24 02:08:15.035673 | orchestrator | ok: [testbed-manager]
2026-03-24 02:08:15.035686 | orchestrator |
2026-03-24 02:08:15.035697 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-24 02:08:15.035709 | orchestrator | Tuesday 24 March 2026 02:08:01 +0000 (0:00:01.447) 0:00:01.663 *********
2026-03-24 02:08:15.035720 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:15.035736 | orchestrator |
2026-03-24 02:08:15.035749 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-24 02:08:15.035760 | orchestrator | Tuesday 24 March 2026 02:08:07 +0000 (0:00:06.165) 0:00:07.829 *********
2026-03-24 02:08:15.035772 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:15.035783 | orchestrator |
2026-03-24 02:08:15.035794 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-24 02:08:15.035806 | orchestrator | Tuesday 24 March 2026 02:08:08 +0000 (0:00:00.539) 0:00:08.368 *********
2026-03-24 02:08:15.035817 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:15.035828 | orchestrator |
2026-03-24 02:08:15.035840 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-24 02:08:15.035851 | orchestrator | Tuesday 24 March 2026 02:08:08 +0000 (0:00:00.434) 0:00:08.802 *********
2026-03-24 02:08:15.035862 | orchestrator | ok: [testbed-manager]
2026-03-24 02:08:15.035874 | orchestrator |
2026-03-24 02:08:15.035885 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-24 02:08:15.035897 | orchestrator | Tuesday 24 March 2026 02:08:09 +0000 (0:00:00.634) 0:00:09.437 *********
2026-03-24 02:08:15.035908 | orchestrator | ok: [testbed-manager]
2026-03-24 02:08:15.035919 | orchestrator |
2026-03-24 02:08:15.035930 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-24 02:08:15.035942 | orchestrator | Tuesday 24 March 2026 02:08:09 +0000 (0:00:00.389) 0:00:09.827 *********
2026-03-24 02:08:15.035953 | orchestrator | ok: [testbed-manager]
2026-03-24 02:08:15.035966 | orchestrator |
2026-03-24 02:08:15.035979 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-24 02:08:15.035993 | orchestrator | Tuesday 24 March 2026 02:08:10 +0000 (0:00:00.419) 0:00:10.246 *********
2026-03-24 02:08:15.036006 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:15.036018 | orchestrator |
2026-03-24 02:08:15.036032 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-24 02:08:15.036045 | orchestrator | Tuesday 24 March 2026 02:08:11 +0000 (0:00:01.104) 0:00:11.351 *********
2026-03-24 02:08:15.036059 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-24 02:08:15.036072 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:15.036085 | orchestrator |
2026-03-24 02:08:15.036098 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-24 02:08:15.036117 | orchestrator | Tuesday 24 March 2026 02:08:12 +0000 (0:00:00.881) 0:00:12.232 *********
2026-03-24 02:08:15.036137 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:15.036157 | orchestrator |
2026-03-24 02:08:15.036179 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-24 02:08:15.036201 | orchestrator | Tuesday 24 March 2026 02:08:13 +0000 (0:00:01.665) 0:00:13.898 *********
2026-03-24 02:08:15.036223 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:15.036245 | orchestrator |
2026-03-24 02:08:15.036260 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:08:15.036274 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:08:15.036288 | orchestrator |
2026-03-24 02:08:15.036301 | orchestrator |
2026-03-24 02:08:15.036314 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:08:15.036336 | orchestrator | Tuesday 24 March 2026 02:08:14 +0000 (0:00:00.840) 0:00:14.738 *********
2026-03-24 02:08:15.036349 | orchestrator | ===============================================================================
2026-03-24 02:08:15.036361 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.17s
2026-03-24 02:08:15.036373 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.67s
2026-03-24 02:08:15.036384 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.45s
2026-03-24 02:08:15.036425 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.10s
2026-03-24 02:08:15.036437 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s
2026-03-24 02:08:15.036448 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.84s
2026-03-24 02:08:15.036459 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.63s
2026-03-24 02:08:15.036470 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2026-03-24 02:08:15.036482 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2026-03-24 02:08:15.036493 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2026-03-24 02:08:15.036505 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s
2026-03-24 02:08:15.303287 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-24 02:08:15.336460 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-24 02:08:15.336562 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-24 02:08:15.417536 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 185 0 --:--:-- --:--:-- --:--:-- 187
2026-03-24 02:08:15.431514 | orchestrator | + osism apply --environment custom workarounds
2026-03-24 02:08:17.362137 | orchestrator | 2026-03-24 02:08:17 | INFO  | Trying to run play workarounds in environment custom
2026-03-24 02:08:27.506595 | orchestrator | 2026-03-24 02:08:27 | INFO  | Task ae55efd4-d046-4c0a-8e38-dad3a29b7cc9 (workarounds) was prepared for execution.
2026-03-24 02:08:27.506740 | orchestrator | 2026-03-24 02:08:27 | INFO  | It takes a moment until task ae55efd4-d046-4c0a-8e38-dad3a29b7cc9 (workarounds) has been started and output is visible here.
2026-03-24 02:08:50.524028 | orchestrator |
2026-03-24 02:08:50.524144 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 02:08:50.524160 | orchestrator |
2026-03-24 02:08:50.524172 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-24 02:08:50.524184 | orchestrator | Tuesday 24 March 2026 02:08:31 +0000 (0:00:00.091) 0:00:00.091 *********
2026-03-24 02:08:50.524196 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-24 02:08:50.524208 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-24 02:08:50.524219 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-24 02:08:50.524231 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-24 02:08:50.524240 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-24 02:08:50.524251 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-24 02:08:50.524263 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-24 02:08:50.524274 | orchestrator |
2026-03-24 02:08:50.524285 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-24 02:08:50.524297 | orchestrator |
2026-03-24 02:08:50.524308 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-24 02:08:50.524319 | orchestrator | Tuesday 24 March 2026 02:08:31 +0000 (0:00:00.565) 0:00:00.656 *********
2026-03-24 02:08:50.524331 | orchestrator | ok: [testbed-manager]
2026-03-24 02:08:50.524365 | orchestrator |
2026-03-24 02:08:50.524376 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-24 02:08:50.524507 | orchestrator |
2026-03-24 02:08:50.524520 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-24 02:08:50.524531 | orchestrator | Tuesday 24 March 2026 02:08:33 +0000 (0:00:01.934) 0:00:02.591 *********
2026-03-24 02:08:50.524542 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:08:50.524553 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:08:50.524564 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:08:50.524576 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:08:50.524587 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:08:50.524598 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:08:50.524609 | orchestrator |
2026-03-24 02:08:50.524621 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-24 02:08:50.524632 | orchestrator |
2026-03-24 02:08:50.524643 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-24 02:08:50.524669 | orchestrator | Tuesday 24 March 2026 02:08:35 +0000 (0:00:01.723) 0:00:04.315 *********
2026-03-24 02:08:50.524682 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-24 02:08:50.524699 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-24 02:08:50.524715 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-24 02:08:50.524732 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-24 02:08:50.524749 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-24 02:08:50.524765 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-24 02:08:50.524782 | orchestrator |
2026-03-24 02:08:50.524799 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-24 02:08:50.524813 | orchestrator | Tuesday 24 March 2026 02:08:36 +0000 (0:00:01.404) 0:00:05.720 *********
2026-03-24 02:08:50.524830 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:08:50.524848 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:08:50.524864 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:08:50.524876 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:08:50.524886 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:08:50.524897 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:08:50.524907 | orchestrator |
2026-03-24 02:08:50.524917 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-24 02:08:50.524928 | orchestrator | Tuesday 24 March 2026 02:08:39 +0000 (0:00:03.046) 0:00:08.767 *********
2026-03-24 02:08:50.524937 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:08:50.524948 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:08:50.524959 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:08:50.524969 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:08:50.524979 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:08:50.524990 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:08:50.525002 | orchestrator |
2026-03-24 02:08:50.525012 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-24 02:08:50.525024 | orchestrator |
2026-03-24 02:08:50.525035 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-24 02:08:50.525046 | orchestrator | Tuesday 24 March 2026 02:08:40 +0000 (0:00:00.644) 0:00:09.412 *********
2026-03-24 02:08:50.525056 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:08:50.525066 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:08:50.525076 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:08:50.525087 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:08:50.525098 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:08:50.525108 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:08:50.525131 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:50.525141 | orchestrator |
2026-03-24 02:08:50.525151 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-24 02:08:50.525161 | orchestrator | Tuesday 24 March 2026 02:08:42 +0000 (0:00:01.516) 0:00:10.928 *********
2026-03-24 02:08:50.525172 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:08:50.525183 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:08:50.525195 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:08:50.525206 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:08:50.525217 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:08:50.525228 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:08:50.525259 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:50.525270 | orchestrator |
2026-03-24 02:08:50.525281 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-24 02:08:50.525292 | orchestrator | Tuesday 24 March 2026 02:08:43 +0000 (0:00:01.498) 0:00:12.427 *********
2026-03-24 02:08:50.525303 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:08:50.525313 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:08:50.525324 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:08:50.525334 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:08:50.525344 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:08:50.525355 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:08:50.525365 | orchestrator | ok: [testbed-manager]
2026-03-24 02:08:50.525375 | orchestrator |
2026-03-24 02:08:50.525408 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-24 02:08:50.525420 | orchestrator | Tuesday 24 March 2026 02:08:45 +0000 (0:00:01.539) 0:00:13.967 *********
2026-03-24 02:08:50.525429 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:08:50.525440 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:08:50.525451 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:08:50.525461 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:08:50.525471 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:08:50.525482 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:08:50.525492 | orchestrator | changed: [testbed-manager]
2026-03-24 02:08:50.525502 | orchestrator |
2026-03-24 02:08:50.525513 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-24 02:08:50.525524 | orchestrator | Tuesday 24 March 2026 02:08:46 +0000 (0:00:01.697) 0:00:15.664 *********
2026-03-24 02:08:50.525535 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:08:50.525547 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:08:50.525557 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:08:50.525568 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:08:50.525578 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:08:50.525588 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:08:50.525599 | orchestrator | skipping: [testbed-manager]
2026-03-24 02:08:50.525610 | orchestrator |
2026-03-24 02:08:50.525621 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-24 02:08:50.525632 | orchestrator |
2026-03-24 02:08:50.525642 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-24 02:08:50.525653 | orchestrator | Tuesday 24 March 2026 02:08:47 +0000 (0:00:00.603) 0:00:16.268 *********
2026-03-24 02:08:50.525663 | orchestrator | ok: [testbed-manager]
2026-03-24 02:08:50.525674 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:08:50.525684 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:08:50.525694 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:08:50.525705 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:08:50.525725 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:08:50.525736 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:08:50.525746 | orchestrator |
2026-03-24 02:08:50.525757 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:08:50.525769 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-24 02:08:50.525782 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 02:08:50.525803 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 02:08:50.525815 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 02:08:50.525826 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 02:08:50.525837 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 02:08:50.525848 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 02:08:50.525859 | orchestrator |
2026-03-24 02:08:50.525870 | orchestrator |
2026-03-24 02:08:50.525880 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:08:50.525891 | orchestrator | Tuesday 24 March 2026 02:08:50 +0000 (0:00:02.996) 0:00:19.264 *********
2026-03-24 02:08:50.525901 | orchestrator | ===============================================================================
2026-03-24 02:08:50.525912 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.05s
2026-03-24 02:08:50.525923 | orchestrator | Install python3-docker -------------------------------------------------- 3.00s
2026-03-24 02:08:50.525933 | orchestrator | Apply netplan configuration --------------------------------------------- 1.94s
2026-03-24 02:08:50.525944 | orchestrator | Apply netplan configuration --------------------------------------------- 1.72s
2026-03-24 02:08:50.525955 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.70s
2026-03-24 02:08:50.525965 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.54s
2026-03-24 02:08:50.525977 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.52s
2026-03-24 02:08:50.525988 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.50s
2026-03-24 02:08:50.525998 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.40s
2026-03-24 02:08:50.526009 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.64s
2026-03-24 02:08:50.526068 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.60s
2026-03-24 02:08:50.526093 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.57s
2026-03-24 02:08:51.114269 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-24 02:09:03.254725 | orchestrator | 2026-03-24 02:09:03 | INFO  | Task 0071dbd7-0939-4b8f-b1b6-87636c8b15ac (reboot) was prepared for execution.
2026-03-24 02:09:03.254864 | orchestrator | 2026-03-24 02:09:03 | INFO  | It takes a moment until task 0071dbd7-0939-4b8f-b1b6-87636c8b15ac (reboot) has been started and output is visible here.
2026-03-24 02:09:12.954541 | orchestrator |
2026-03-24 02:09:12.954649 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-24 02:09:12.954664 | orchestrator |
2026-03-24 02:09:12.954675 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-24 02:09:12.954685 | orchestrator | Tuesday 24 March 2026 02:09:07 +0000 (0:00:00.166) 0:00:00.166 *********
2026-03-24 02:09:12.954694 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:09:12.954704 | orchestrator |
2026-03-24 02:09:12.954714 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-24 02:09:12.954723 | orchestrator | Tuesday 24 March 2026 02:09:07 +0000 (0:00:00.081) 0:00:00.248 *********
2026-03-24 02:09:12.954733 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:09:12.954742 | orchestrator |
2026-03-24 02:09:12.954751 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-24 02:09:12.954782 | orchestrator | Tuesday 24 March 2026 02:09:08 +0000 (0:00:00.906) 0:00:01.154 *********
2026-03-24 02:09:12.954792 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:09:12.954801 | orchestrator |
2026-03-24 02:09:12.954811 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-24 02:09:12.954820 | orchestrator |
2026-03-24 02:09:12.954829 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-24 02:09:12.954838 | orchestrator | Tuesday 24 March 2026 02:09:08 +0000 (0:00:00.094) 0:00:01.249 *********
2026-03-24 02:09:12.954847 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:09:12.954856 | orchestrator |
2026-03-24 02:09:12.954880 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-24 02:09:12.954898 | orchestrator | Tuesday 24 March 2026 02:09:08 +0000 (0:00:00.086) 0:00:01.335 *********
2026-03-24 02:09:12.954908 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:09:12.954917 | orchestrator |
2026-03-24 02:09:12.954927 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-24 02:09:12.954952 | orchestrator | Tuesday 24 March 2026 02:09:09 +0000 (0:00:00.693) 0:00:02.029 *********
2026-03-24 02:09:12.954974 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:09:12.954993 | orchestrator |
2026-03-24 02:09:12.955008 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-24 02:09:12.955023 | orchestrator |
2026-03-24 02:09:12.955036 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-24 02:09:12.955051 | orchestrator | Tuesday 24 March 2026 02:09:09 +0000 (0:00:00.121) 0:00:02.150 *********
2026-03-24 02:09:12.955065 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:09:12.955081 | orchestrator |
2026-03-24 02:09:12.955095 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-24 02:09:12.955112 | orchestrator | Tuesday 24 March 2026 02:09:09 +0000 (0:00:00.153) 0:00:02.304 *********
2026-03-24 02:09:12.955128 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:09:12.955145 | orchestrator |
2026-03-24 02:09:12.955162 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-24 02:09:12.955180 | orchestrator | Tuesday 24 March 2026 02:09:10 +0000 (0:00:00.643) 0:00:02.948 *********
2026-03-24 02:09:12.955195 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:09:12.955212 | orchestrator |
2026-03-24 02:09:12.955229 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-24 02:09:12.955246 | orchestrator |
2026-03-24 02:09:12.955262 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-24 02:09:12.955278 | orchestrator | Tuesday 24 March 2026 02:09:10 +0000 (0:00:00.106) 0:00:03.055 *********
2026-03-24 02:09:12.955293 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:09:12.955309 | orchestrator |
2026-03-24 02:09:12.955323 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-24 02:09:12.955338 | orchestrator | Tuesday 24 March 2026 02:09:10 +0000 (0:00:00.083) 0:00:03.138 *********
2026-03-24 02:09:12.955354 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:09:12.955370 | orchestrator |
2026-03-24 02:09:12.955408 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-24 02:09:12.955424 | orchestrator | Tuesday 24 March 2026 02:09:10 +0000 (0:00:00.654) 0:00:03.793 *********
2026-03-24 02:09:12.955438 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:09:12.955448 | orchestrator |
2026-03-24 02:09:12.955458 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-24 02:09:12.955467 | orchestrator |
2026-03-24 02:09:12.955476 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-24 02:09:12.955485 | orchestrator | Tuesday 24 March 2026 02:09:10 +0000 (0:00:00.112) 0:00:03.905 *********
2026-03-24 02:09:12.955494 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:09:12.955503 | orchestrator |
2026-03-24 02:09:12.955512 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-24 02:09:12.955532 | orchestrator | Tuesday 24 March 2026 02:09:11 +0000 (0:00:00.108) 0:00:04.014 *********
2026-03-24 02:09:12.955541 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:09:12.955550 | orchestrator |
2026-03-24 02:09:12.955559 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-24 02:09:12.955569 | orchestrator | Tuesday 24 March 2026 02:09:11 +0000 (0:00:00.661) 0:00:04.675 *********
2026-03-24 02:09:12.955578 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:09:12.955587 | orchestrator |
2026-03-24 02:09:12.955596 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-24 02:09:12.955606 | orchestrator |
2026-03-24 02:09:12.955615 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-24 02:09:12.955624 | orchestrator | Tuesday 24 March 2026 02:09:11 +0000 (0:00:00.117) 0:00:04.793 *********
2026-03-24 02:09:12.955633 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:09:12.955642 | orchestrator |
2026-03-24 02:09:12.955651 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-24 02:09:12.955660 | orchestrator | Tuesday 24 March 2026 02:09:11 +0000 (0:00:00.099) 0:00:04.892 *********
2026-03-24 02:09:12.955669 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:09:12.955678 | orchestrator |
2026-03-24 02:09:12.955687 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-24 02:09:12.955697 | orchestrator | Tuesday 24 March 2026 02:09:12 +0000 (0:00:00.653) 0:00:05.546 *********
2026-03-24 02:09:12.955722 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:09:12.955732 | orchestrator |
2026-03-24 02:09:12.955741 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:09:12.955751 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 02:09:12.955774 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:09:12.955783 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:09:12.955801 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:09:12.955811 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:09:12.955820 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:09:12.955829 | orchestrator | 2026-03-24 02:09:12.955840 | orchestrator | 2026-03-24 02:09:12.955856 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:09:12.955878 | orchestrator | Tuesday 24 March 2026 02:09:12 +0000 (0:00:00.040) 0:00:05.586 ********* 2026-03-24 02:09:12.955904 | orchestrator | =============================================================================== 2026-03-24 02:09:12.955920 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.21s 2026-03-24 02:09:12.955936 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.61s 2026-03-24 02:09:12.955957 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.59s 2026-03-24 02:09:13.219895 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-24 02:09:25.197969 | orchestrator | 2026-03-24 02:09:25 | INFO  | Task bcd33043-9eae-402a-a6d4-92b366be76ce (wait-for-connection) was prepared for execution. 2026-03-24 02:09:25.198114 | orchestrator | 2026-03-24 02:09:25 | INFO  | It takes a moment until task bcd33043-9eae-402a-a6d4-92b366be76ce (wait-for-connection) has been started and output is visible here. 
2026-03-24 02:09:40.996816 | orchestrator | 2026-03-24 02:09:40.996921 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-24 02:09:40.996933 | orchestrator | 2026-03-24 02:09:40.996940 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-24 02:09:40.996947 | orchestrator | Tuesday 24 March 2026 02:09:29 +0000 (0:00:00.230) 0:00:00.230 ********* 2026-03-24 02:09:40.996954 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:09:40.996961 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:09:40.996968 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:09:40.996975 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:09:40.996981 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:09:40.996987 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:09:40.996993 | orchestrator | 2026-03-24 02:09:40.996998 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:09:40.997006 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:09:40.997013 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:09:40.997020 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:09:40.997026 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:09:40.997032 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:09:40.997038 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:09:40.997045 | orchestrator | 2026-03-24 02:09:40.997052 | orchestrator | 2026-03-24 02:09:40.997059 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-24 02:09:40.997065 | orchestrator | Tuesday 24 March 2026 02:09:40 +0000 (0:00:11.553) 0:00:11.784 ********* 2026-03-24 02:09:40.997071 | orchestrator | =============================================================================== 2026-03-24 02:09:40.997076 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s 2026-03-24 02:09:41.179716 | orchestrator | + osism apply hddtemp 2026-03-24 02:09:52.965077 | orchestrator | 2026-03-24 02:09:52 | INFO  | Task 62f6aeba-e5f1-407b-80a0-c5d71ceca291 (hddtemp) was prepared for execution. 2026-03-24 02:09:52.965176 | orchestrator | 2026-03-24 02:09:52 | INFO  | It takes a moment until task 62f6aeba-e5f1-407b-80a0-c5d71ceca291 (hddtemp) has been started and output is visible here. 2026-03-24 02:10:22.823587 | orchestrator | 2026-03-24 02:10:22.823744 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-24 02:10:22.823763 | orchestrator | 2026-03-24 02:10:22.823776 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-24 02:10:22.823788 | orchestrator | Tuesday 24 March 2026 02:09:57 +0000 (0:00:00.257) 0:00:00.257 ********* 2026-03-24 02:10:22.823800 | orchestrator | ok: [testbed-manager] 2026-03-24 02:10:22.823813 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:10:22.823825 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:10:22.823837 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:10:22.823848 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:10:22.823859 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:10:22.823871 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:10:22.823882 | orchestrator | 2026-03-24 02:10:22.823894 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-24 02:10:22.823905 | orchestrator | Tuesday 24 March 2026 
02:09:57 +0000 (0:00:00.682) 0:00:00.940 ********* 2026-03-24 02:10:22.823919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:10:22.823959 | orchestrator | 2026-03-24 02:10:22.823972 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-24 02:10:22.823983 | orchestrator | Tuesday 24 March 2026 02:09:58 +0000 (0:00:01.174) 0:00:02.114 ********* 2026-03-24 02:10:22.823995 | orchestrator | ok: [testbed-manager] 2026-03-24 02:10:22.824006 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:10:22.824017 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:10:22.824028 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:10:22.824041 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:10:22.824054 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:10:22.824067 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:10:22.824079 | orchestrator | 2026-03-24 02:10:22.824092 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-24 02:10:22.824123 | orchestrator | Tuesday 24 March 2026 02:10:01 +0000 (0:00:02.082) 0:00:04.197 ********* 2026-03-24 02:10:22.824136 | orchestrator | changed: [testbed-manager] 2026-03-24 02:10:22.824150 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:10:22.824163 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:10:22.824176 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:10:22.824188 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:10:22.824201 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:10:22.824213 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:10:22.824226 | orchestrator | 2026-03-24 02:10:22.824239 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-03-24 02:10:22.824251 | orchestrator | Tuesday 24 March 2026 02:10:02 +0000 (0:00:01.143) 0:00:05.341 ********* 2026-03-24 02:10:22.824264 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:10:22.824276 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:10:22.824290 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:10:22.824302 | orchestrator | ok: [testbed-manager] 2026-03-24 02:10:22.824314 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:10:22.824326 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:10:22.824337 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:10:22.824348 | orchestrator | 2026-03-24 02:10:22.824359 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-24 02:10:22.824371 | orchestrator | Tuesday 24 March 2026 02:10:04 +0000 (0:00:01.958) 0:00:07.300 ********* 2026-03-24 02:10:22.824382 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:10:22.824415 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:10:22.824427 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:10:22.824438 | orchestrator | changed: [testbed-manager] 2026-03-24 02:10:22.824450 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:10:22.824461 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:10:22.824472 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:10:22.824483 | orchestrator | 2026-03-24 02:10:22.824495 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-24 02:10:22.824506 | orchestrator | Tuesday 24 March 2026 02:10:04 +0000 (0:00:00.679) 0:00:07.979 ********* 2026-03-24 02:10:22.824517 | orchestrator | changed: [testbed-manager] 2026-03-24 02:10:22.824529 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:10:22.824540 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:10:22.824551 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:10:22.824562 | orchestrator | changed: 
[testbed-node-4] 2026-03-24 02:10:22.824574 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:10:22.824585 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:10:22.824596 | orchestrator | 2026-03-24 02:10:22.824607 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-24 02:10:22.824619 | orchestrator | Tuesday 24 March 2026 02:10:19 +0000 (0:00:14.527) 0:00:22.507 ********* 2026-03-24 02:10:22.824631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:10:22.824651 | orchestrator | 2026-03-24 02:10:22.824663 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-24 02:10:22.824674 | orchestrator | Tuesday 24 March 2026 02:10:20 +0000 (0:00:01.180) 0:00:23.687 ********* 2026-03-24 02:10:22.824686 | orchestrator | changed: [testbed-manager] 2026-03-24 02:10:22.824697 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:10:22.824709 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:10:22.824720 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:10:22.824731 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:10:22.824743 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:10:22.824754 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:10:22.824765 | orchestrator | 2026-03-24 02:10:22.824776 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:10:22.824788 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:10:22.824820 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:10:22.824833 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:10:22.824845 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:10:22.824857 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:10:22.824868 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:10:22.824879 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:10:22.824891 | orchestrator | 2026-03-24 02:10:22.824902 | orchestrator | 2026-03-24 02:10:22.824914 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:10:22.824925 | orchestrator | Tuesday 24 March 2026 02:10:22 +0000 (0:00:01.937) 0:00:25.624 ********* 2026-03-24 02:10:22.824936 | orchestrator | =============================================================================== 2026-03-24 02:10:22.824948 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.53s 2026-03-24 02:10:22.824959 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.08s 2026-03-24 02:10:22.824971 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.96s 2026-03-24 02:10:22.824987 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.94s 2026-03-24 02:10:22.824999 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.18s 2026-03-24 02:10:22.825010 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.17s 2026-03-24 02:10:22.825021 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.14s 2026-03-24 02:10:22.825033 | orchestrator | osism.services.hddtemp : Gather 
variables for each operating system ----- 0.68s 2026-03-24 02:10:22.825044 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.68s 2026-03-24 02:10:23.087974 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-24 02:10:23.140989 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-24 02:10:23.141080 | orchestrator | + sudo systemctl restart manager.service 2026-03-24 02:10:37.332631 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-24 02:10:37.332715 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-24 02:10:37.332723 | orchestrator | + local max_attempts=60 2026-03-24 02:10:37.332729 | orchestrator | + local name=ceph-ansible 2026-03-24 02:10:37.332734 | orchestrator | + local attempt_num=1 2026-03-24 02:10:37.332740 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:10:37.369473 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:10:37.369588 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:10:37.369608 | orchestrator | + sleep 5 2026-03-24 02:10:42.374932 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:10:42.395294 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:10:42.395454 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:10:42.395476 | orchestrator | + sleep 5 2026-03-24 02:10:47.399029 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:10:47.435952 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:10:47.436048 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:10:47.436063 | orchestrator | + sleep 5 2026-03-24 02:10:52.439902 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:10:52.469766 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:10:52.469874 | orchestrator | + 
(( attempt_num++ == max_attempts )) 2026-03-24 02:10:52.469897 | orchestrator | + sleep 5 2026-03-24 02:10:57.474065 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:10:57.511147 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:10:57.511252 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:10:57.511268 | orchestrator | + sleep 5 2026-03-24 02:11:02.515463 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:11:02.553712 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:02.553798 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:11:02.553804 | orchestrator | + sleep 5 2026-03-24 02:11:07.558409 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:11:07.594578 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:07.594683 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:11:07.594698 | orchestrator | + sleep 5 2026-03-24 02:11:12.598173 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:11:12.630845 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:12.630947 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:11:12.630963 | orchestrator | + sleep 5 2026-03-24 02:11:17.632713 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:11:17.708241 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:17.708358 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:11:17.708382 | orchestrator | + sleep 5 2026-03-24 02:11:22.711233 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:11:22.749845 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:22.749944 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-24 02:11:22.749961 | orchestrator | + sleep 5 2026-03-24 02:11:27.754809 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:11:27.778098 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:27.778186 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:11:27.778198 | orchestrator | + sleep 5 2026-03-24 02:11:32.783435 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:11:32.824885 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:32.824990 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:11:32.825000 | orchestrator | + sleep 5 2026-03-24 02:11:37.829662 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:11:37.866535 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:37.866648 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-24 02:11:37.866665 | orchestrator | + sleep 5 2026-03-24 02:11:42.871466 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-24 02:11:42.911560 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:42.911657 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-24 02:11:42.911673 | orchestrator | + local max_attempts=60 2026-03-24 02:11:42.911685 | orchestrator | + local name=kolla-ansible 2026-03-24 02:11:42.911697 | orchestrator | + local attempt_num=1 2026-03-24 02:11:42.913297 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-24 02:11:42.948815 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:42.948914 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-24 02:11:42.948962 | orchestrator | + local max_attempts=60 2026-03-24 02:11:42.948976 | orchestrator | + local name=osism-ansible 2026-03-24 02:11:42.948987 | 
orchestrator | + local attempt_num=1 2026-03-24 02:11:42.949872 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-24 02:11:42.984375 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-24 02:11:42.984550 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-24 02:11:42.984568 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-24 02:11:43.123777 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-24 02:11:43.261604 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-24 02:11:43.421650 | orchestrator | ARA in osism-ansible already disabled. 2026-03-24 02:11:43.554081 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-24 02:11:43.554368 | orchestrator | + osism apply gather-facts 2026-03-24 02:11:55.411171 | orchestrator | 2026-03-24 02:11:55 | INFO  | Task 30f3e2be-6637-4153-9d1b-9db2292b4a7a (gather-facts) was prepared for execution. 2026-03-24 02:11:55.411291 | orchestrator | 2026-03-24 02:11:55 | INFO  | It takes a moment until task 30f3e2be-6637-4153-9d1b-9db2292b4a7a (gather-facts) has been started and output is visible here. 
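The `wait_for_container_healthy` calls traced above (expanded by `set -x`) follow a simple poll-and-retry shape. Below is a hypothetical reconstruction from the trace, not the actual testbed script; the `HEALTH_CMD` and `HEALTH_SLEEP` overrides are assumptions added so the loop can be exercised without a running Docker daemon.

```shell
#!/usr/bin/env bash

# Default status probe, matching the `docker inspect` call seen in the trace.
container_health_status() {
    docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll the container's health status until it reports "healthy" or the
# attempt budget is exhausted. Mirrors the traced loop: check, then
# `(( attempt_num++ == max_attempts ))`, then `sleep 5`.
wait_for_container_healthy() {
    local max_attempts=$1 name=$2 attempt_num=1 status
    while true; do
        status=$("${HEALTH_CMD:-container_health_status}" "$name")
        if [[ "$status" == "healthy" ]]; then
            return 0
        fi
        if (( attempt_num++ == max_attempts )); then
            echo "container $name still '$status' after $max_attempts attempts" >&2
            return 1
        fi
        sleep "${HEALTH_SLEEP:-5}"
    done
}
```

With the defaults (60 attempts, 5-second sleep) this gives the roughly five-minute budget the ceph-ansible container needed above while its health check moved from `unhealthy` through `starting` to `healthy`.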
2026-03-24 02:12:08.610095 | orchestrator | 2026-03-24 02:12:08.610478 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-24 02:12:08.610513 | orchestrator | 2026-03-24 02:12:08.610529 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-24 02:12:08.610544 | orchestrator | Tuesday 24 March 2026 02:11:59 +0000 (0:00:00.182) 0:00:00.182 ********* 2026-03-24 02:12:08.610559 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:12:08.610573 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:12:08.610588 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:12:08.610600 | orchestrator | ok: [testbed-manager] 2026-03-24 02:12:08.610613 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:12:08.610626 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:12:08.610639 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:12:08.610651 | orchestrator | 2026-03-24 02:12:08.610664 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-24 02:12:08.610677 | orchestrator | 2026-03-24 02:12:08.610690 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-24 02:12:08.610703 | orchestrator | Tuesday 24 March 2026 02:12:07 +0000 (0:00:08.669) 0:00:08.852 ********* 2026-03-24 02:12:08.610716 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:12:08.610730 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:12:08.610743 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:12:08.610756 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:12:08.610768 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:12:08.610781 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:12:08.610794 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:12:08.610807 | orchestrator | 2026-03-24 02:12:08.610820 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-24 02:12:08.610834 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:12:08.610849 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:12:08.610862 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:12:08.610876 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:12:08.610888 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:12:08.610900 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:12:08.610938 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:12:08.610950 | orchestrator | 2026-03-24 02:12:08.610961 | orchestrator | 2026-03-24 02:12:08.610973 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:12:08.610984 | orchestrator | Tuesday 24 March 2026 02:12:08 +0000 (0:00:00.498) 0:00:09.350 ********* 2026-03-24 02:12:08.610995 | orchestrator | =============================================================================== 2026-03-24 02:12:08.611006 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.67s 2026-03-24 02:12:08.611018 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-03-24 02:12:08.936340 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-24 02:12:08.946830 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-24 
02:12:08.957907 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-24 02:12:08.968640 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-24 02:12:08.979298 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-24 02:12:08.989869 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-24 02:12:09.000736 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-24 02:12:09.011259 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-24 02:12:09.021604 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-24 02:12:09.032017 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-24 02:12:09.042484 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-24 02:12:09.052990 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-24 02:12:09.068662 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-24 02:12:09.077898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-24 02:12:09.087704 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-24 02:12:09.097900 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-24 02:12:09.108842 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-24 02:12:09.118849 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-24 02:12:09.134082 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-24 02:12:09.143588 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-24 02:12:09.154167 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-24 02:12:09.163107 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-24 02:12:09.173062 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-24 02:12:09.186275 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-24 02:12:09.437315 | orchestrator | ok: Runtime: 0:24:46.148311 2026-03-24 02:12:09.548857 | 2026-03-24 02:12:09.549063 | TASK [Deploy services] 2026-03-24 02:12:10.250415 | orchestrator | 2026-03-24 02:12:10.250591 | orchestrator | # DEPLOY SERVICES 2026-03-24 02:12:10.250613 | orchestrator | 2026-03-24 02:12:10.250624 | orchestrator | + set -e 2026-03-24 02:12:10.250633 | orchestrator | + echo 2026-03-24 02:12:10.250643 | orchestrator | + echo '# DEPLOY SERVICES' 2026-03-24 02:12:10.250653 | orchestrator | + echo 2026-03-24 02:12:10.250685 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 02:12:10.250701 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 02:12:10.250712 | orchestrator | ++ INTERACTIVE=false 2026-03-24 
02:12:10.250721 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 02:12:10.250736 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 02:12:10.250744 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 02:12:10.250755 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 02:12:10.250764 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 02:12:10.250788 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-24 02:12:10.250797 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 02:12:10.250808 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 02:12:10.250816 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 02:12:10.250827 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 02:12:10.250834 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 02:12:10.250842 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 02:12:10.250850 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 02:12:10.250858 | orchestrator | ++ export ARA=false 2026-03-24 02:12:10.250865 | orchestrator | ++ ARA=false 2026-03-24 02:12:10.250873 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 02:12:10.250881 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 02:12:10.250888 | orchestrator | ++ export TEMPEST=false 2026-03-24 02:12:10.250895 | orchestrator | ++ TEMPEST=false 2026-03-24 02:12:10.250903 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 02:12:10.250911 | orchestrator | ++ IS_ZUUL=true 2026-03-24 02:12:10.250919 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 02:12:10.250926 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 02:12:10.250935 | orchestrator | ++ export EXTERNAL_API=false 2026-03-24 02:12:10.250943 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 02:12:10.250951 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 02:12:10.250958 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 02:12:10.250966 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 
02:12:10.250974 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 02:12:10.250981 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 02:12:10.250996 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 02:12:10.251003 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-24 02:12:10.260938 | orchestrator | + set -e 2026-03-24 02:12:10.261028 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 02:12:10.261044 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 02:12:10.261052 | orchestrator | ++ INTERACTIVE=false 2026-03-24 02:12:10.261059 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 02:12:10.261066 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 02:12:10.261073 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 02:12:10.261080 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 02:12:10.261086 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 02:12:10.261093 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-24 02:12:10.261099 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 02:12:10.261107 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 02:12:10.261114 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 02:12:10.261130 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 02:12:10.261138 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 02:12:10.261144 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 02:12:10.261150 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 02:12:10.261157 | orchestrator | ++ export ARA=false 2026-03-24 02:12:10.261165 | orchestrator | ++ ARA=false 2026-03-24 02:12:10.261170 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 02:12:10.261176 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 02:12:10.261182 | orchestrator | ++ export TEMPEST=false 2026-03-24 02:12:10.261191 | orchestrator | ++ TEMPEST=false 2026-03-24 02:12:10.261197 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 
02:12:10.261204 | orchestrator | ++ IS_ZUUL=true 2026-03-24 02:12:10.261210 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 02:12:10.261216 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 02:12:10.261222 | orchestrator | ++ export EXTERNAL_API=false 2026-03-24 02:12:10.261228 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 02:12:10.261234 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 02:12:10.261241 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 02:12:10.261266 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 02:12:10.261273 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 02:12:10.261313 | orchestrator | 2026-03-24 02:12:10.261321 | orchestrator | # PULL IMAGES 2026-03-24 02:12:10.261328 | orchestrator | 2026-03-24 02:12:10.261334 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 02:12:10.261340 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 02:12:10.261347 | orchestrator | + echo 2026-03-24 02:12:10.261353 | orchestrator | + echo '# PULL IMAGES' 2026-03-24 02:12:10.261360 | orchestrator | + echo 2026-03-24 02:12:10.262958 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-24 02:12:10.326551 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-24 02:12:10.326653 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-24 02:12:12.159111 | orchestrator | 2026-03-24 02:12:12 | INFO  | Trying to run play pull-images in environment custom 2026-03-24 02:12:22.378561 | orchestrator | 2026-03-24 02:12:22 | INFO  | Task 0d82c962-ed23-4b61-9652-070fc78e6a8d (pull-images) was prepared for execution. 2026-03-24 02:12:22.378690 | orchestrator | 2026-03-24 02:12:22 | INFO  | Task 0d82c962-ed23-4b61-9652-070fc78e6a8d is running in background. No more output. Check ARA for logs. 
2026-03-24 02:12:22.666508 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-03-24 02:12:34.683666 | orchestrator | 2026-03-24 02:12:34 | INFO  | Task 1920beb9-dc37-4390-9e3e-1554d43ba849 (cgit) was prepared for execution. 2026-03-24 02:12:34.683816 | orchestrator | 2026-03-24 02:12:34 | INFO  | Task 1920beb9-dc37-4390-9e3e-1554d43ba849 is running in background. No more output. Check ARA for logs. 2026-03-24 02:12:46.573852 | orchestrator | 2026-03-24 02:12:46 | INFO  | Task 5e5e6087-be4c-4903-b9c2-863d9b48a171 (dotfiles) was prepared for execution. 2026-03-24 02:12:46.573992 | orchestrator | 2026-03-24 02:12:46 | INFO  | Task 5e5e6087-be4c-4903-b9c2-863d9b48a171 is running in background. No more output. Check ARA for logs. 2026-03-24 02:12:58.929443 | orchestrator | 2026-03-24 02:12:58 | INFO  | Task d071d88b-df72-411d-88ff-2b80c4083b54 (homer) was prepared for execution. 2026-03-24 02:12:58.929529 | orchestrator | 2026-03-24 02:12:58 | INFO  | Task d071d88b-df72-411d-88ff-2b80c4083b54 is running in background. No more output. Check ARA for logs. 2026-03-24 02:13:11.445083 | orchestrator | 2026-03-24 02:13:11 | INFO  | Task 3d7cb187-58e7-469e-a6a9-9764799040ef (phpmyadmin) was prepared for execution. 2026-03-24 02:13:11.445174 | orchestrator | 2026-03-24 02:13:11 | INFO  | Task 3d7cb187-58e7-469e-a6a9-9764799040ef is running in background. No more output. Check ARA for logs. 2026-03-24 02:13:23.856196 | orchestrator | 2026-03-24 02:13:23 | INFO  | Task e3adb718-2d85-488e-ad4f-88a132036d20 (sosreport) was prepared for execution. 2026-03-24 02:13:23.856341 | orchestrator | 2026-03-24 02:13:23 | INFO  | Task e3adb718-2d85-488e-ad4f-88a132036d20 is running in background. No more output. Check ARA for logs. 
2026-03-24 02:13:24.133486 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-03-24 02:13:24.140675 | orchestrator | + set -e 2026-03-24 02:13:24.140793 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 02:13:24.140810 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 02:13:24.140821 | orchestrator | ++ INTERACTIVE=false 2026-03-24 02:13:24.140833 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 02:13:24.140842 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 02:13:24.140850 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 02:13:24.140859 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 02:13:24.140868 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 02:13:24.140877 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-24 02:13:24.140885 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 02:13:24.140895 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 02:13:24.140904 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 02:13:24.140912 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 02:13:24.140921 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 02:13:24.140930 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 02:13:24.140939 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 02:13:24.140947 | orchestrator | ++ export ARA=false 2026-03-24 02:13:24.140956 | orchestrator | ++ ARA=false 2026-03-24 02:13:24.140965 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 02:13:24.140997 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 02:13:24.141006 | orchestrator | ++ export TEMPEST=false 2026-03-24 02:13:24.141019 | orchestrator | ++ TEMPEST=false 2026-03-24 02:13:24.141034 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 02:13:24.141054 | orchestrator | ++ IS_ZUUL=true 2026-03-24 02:13:24.141090 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 02:13:24.141112 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 02:13:24.141127 | orchestrator | ++ export EXTERNAL_API=false 2026-03-24 02:13:24.141142 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 02:13:24.141156 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 02:13:24.141172 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 02:13:24.141185 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 02:13:24.141194 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 02:13:24.141202 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 02:13:24.141211 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 02:13:24.141297 | orchestrator | ++ semver 9.5.0 8.0.3 2026-03-24 02:13:24.201834 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-24 02:13:24.201928 | orchestrator | + osism apply frr 2026-03-24 02:13:36.577792 | orchestrator | 2026-03-24 02:13:36 | INFO  | Task ebb0db9c-fe7f-4643-a9ad-64c83f72a874 (frr) was prepared for execution. 2026-03-24 02:13:36.577875 | orchestrator | 2026-03-24 02:13:36 | INFO  | It takes a moment until task ebb0db9c-fe7f-4643-a9ad-64c83f72a874 (frr) has been started and output is visible here. 
2026-03-24 02:14:04.668328 | orchestrator | 2026-03-24 02:14:04.668548 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-24 02:14:04.668568 | orchestrator | 2026-03-24 02:14:04.668581 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-24 02:14:04.668602 | orchestrator | Tuesday 24 March 2026 02:13:42 +0000 (0:00:00.603) 0:00:00.603 ********* 2026-03-24 02:14:04.668614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-24 02:14:04.668626 | orchestrator | 2026-03-24 02:14:04.668637 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-24 02:14:04.668648 | orchestrator | Tuesday 24 March 2026 02:13:42 +0000 (0:00:00.291) 0:00:00.894 ********* 2026-03-24 02:14:04.668660 | orchestrator | changed: [testbed-manager] 2026-03-24 02:14:04.668672 | orchestrator | 2026-03-24 02:14:04.668683 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-24 02:14:04.668697 | orchestrator | Tuesday 24 March 2026 02:13:43 +0000 (0:00:00.999) 0:00:01.894 ********* 2026-03-24 02:14:04.668708 | orchestrator | changed: [testbed-manager] 2026-03-24 02:14:04.668718 | orchestrator | 2026-03-24 02:14:04.668729 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-24 02:14:04.668740 | orchestrator | Tuesday 24 March 2026 02:13:54 +0000 (0:00:10.339) 0:00:12.233 ********* 2026-03-24 02:14:04.668751 | orchestrator | ok: [testbed-manager] 2026-03-24 02:14:04.668762 | orchestrator | 2026-03-24 02:14:04.668773 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-24 02:14:04.668784 | orchestrator | Tuesday 24 March 2026 02:13:55 +0000 (0:00:00.940) 0:00:13.174 ********* 2026-03-24 
02:14:04.668807 | orchestrator | changed: [testbed-manager] 2026-03-24 02:14:04.668818 | orchestrator | 2026-03-24 02:14:04.668829 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-24 02:14:04.668839 | orchestrator | Tuesday 24 March 2026 02:13:56 +0000 (0:00:00.851) 0:00:14.026 ********* 2026-03-24 02:14:04.668850 | orchestrator | ok: [testbed-manager] 2026-03-24 02:14:04.668860 | orchestrator | 2026-03-24 02:14:04.668872 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-24 02:14:04.668884 | orchestrator | Tuesday 24 March 2026 02:13:57 +0000 (0:00:01.056) 0:00:15.082 ********* 2026-03-24 02:14:04.668894 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:14:04.668905 | orchestrator | 2026-03-24 02:14:04.668916 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-24 02:14:04.668927 | orchestrator | Tuesday 24 March 2026 02:13:57 +0000 (0:00:00.144) 0:00:15.227 ********* 2026-03-24 02:14:04.668960 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:14:04.668973 | orchestrator | 2026-03-24 02:14:04.668984 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-24 02:14:04.668995 | orchestrator | Tuesday 24 March 2026 02:13:57 +0000 (0:00:00.145) 0:00:15.373 ********* 2026-03-24 02:14:04.669005 | orchestrator | changed: [testbed-manager] 2026-03-24 02:14:04.669016 | orchestrator | 2026-03-24 02:14:04.669027 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-24 02:14:04.669038 | orchestrator | Tuesday 24 March 2026 02:13:58 +0000 (0:00:00.894) 0:00:16.267 ********* 2026-03-24 02:14:04.669049 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-24 02:14:04.669060 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-24 02:14:04.669072 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-24 02:14:04.669083 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-24 02:14:04.669093 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-24 02:14:04.669104 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-24 02:14:04.669115 | orchestrator | 2026-03-24 02:14:04.669126 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-24 02:14:04.669137 | orchestrator | Tuesday 24 March 2026 02:14:01 +0000 (0:00:03.571) 0:00:19.839 ********* 2026-03-24 02:14:04.669148 | orchestrator | ok: [testbed-manager] 2026-03-24 02:14:04.669159 | orchestrator | 2026-03-24 02:14:04.669169 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-24 02:14:04.669180 | orchestrator | Tuesday 24 March 2026 02:14:03 +0000 (0:00:01.354) 0:00:21.194 ********* 2026-03-24 02:14:04.669191 | orchestrator | changed: [testbed-manager] 2026-03-24 02:14:04.669201 | orchestrator | 2026-03-24 02:14:04.669212 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:14:04.669223 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:14:04.669234 | orchestrator | 2026-03-24 02:14:04.669248 | orchestrator | 2026-03-24 02:14:04.669277 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:14:04.669296 | orchestrator | Tuesday 24 March 2026 02:14:04 +0000 (0:00:01.257) 0:00:22.452 ********* 2026-03-24 02:14:04.669314 | 
orchestrator | =============================================================================== 2026-03-24 02:14:04.669331 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.34s 2026-03-24 02:14:04.669347 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.57s 2026-03-24 02:14:04.669364 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.35s 2026-03-24 02:14:04.669382 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.26s 2026-03-24 02:14:04.669423 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.06s 2026-03-24 02:14:04.669470 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.00s 2026-03-24 02:14:04.669490 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.94s 2026-03-24 02:14:04.669508 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.89s 2026-03-24 02:14:04.669527 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.85s 2026-03-24 02:14:04.669544 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.29s 2026-03-24 02:14:04.669563 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-24 02:14:04.669578 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-24 02:14:04.841025 | orchestrator | + osism apply kubernetes 2026-03-24 02:14:06.608828 | orchestrator | 2026-03-24 02:14:06 | INFO  | Task 71567a53-f95f-4a2a-aab8-cc04ae36a0f8 (kubernetes) was prepared for execution. 
2026-03-24 02:14:06.608932 | orchestrator | 2026-03-24 02:14:06 | INFO  | It takes a moment until task 71567a53-f95f-4a2a-aab8-cc04ae36a0f8 (kubernetes) has been started and output is visible here. 2026-03-24 02:14:26.909627 | orchestrator | 2026-03-24 02:14:26.909748 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-24 02:14:26.909765 | orchestrator | 2026-03-24 02:14:26.909778 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-24 02:14:26.909790 | orchestrator | Tuesday 24 March 2026 02:14:10 +0000 (0:00:00.116) 0:00:00.116 ********* 2026-03-24 02:14:26.909801 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:14:26.909813 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:14:26.909824 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:14:26.909835 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:14:26.909846 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:14:26.909856 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:14:26.909867 | orchestrator | 2026-03-24 02:14:26.909885 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-24 02:14:26.909903 | orchestrator | Tuesday 24 March 2026 02:14:10 +0000 (0:00:00.560) 0:00:00.677 ********* 2026-03-24 02:14:26.909922 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:14:26.909942 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:14:26.909960 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:14:26.909976 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:14:26.909993 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:14:26.910010 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:14:26.910134 | orchestrator | 2026-03-24 02:14:26.910148 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-24 02:14:26.910162 | orchestrator | Tuesday 24 March 2026 
02:14:11 +0000 (0:00:00.513) 0:00:01.190 ********* 2026-03-24 02:14:26.910174 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:14:26.910184 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:14:26.910195 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:14:26.910206 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:14:26.910217 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:14:26.910228 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:14:26.910239 | orchestrator | 2026-03-24 02:14:26.910250 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-24 02:14:26.910262 | orchestrator | Tuesday 24 March 2026 02:14:11 +0000 (0:00:00.546) 0:00:01.737 ********* 2026-03-24 02:14:26.910273 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:14:26.910283 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:14:26.910294 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:14:26.910309 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:14:26.910321 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:14:26.910331 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:14:26.910342 | orchestrator | 2026-03-24 02:14:26.910353 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-24 02:14:26.910365 | orchestrator | Tuesday 24 March 2026 02:14:13 +0000 (0:00:01.356) 0:00:03.093 ********* 2026-03-24 02:14:26.910376 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:14:26.910387 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:14:26.910397 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:14:26.910431 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:14:26.910443 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:14:26.910454 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:14:26.910465 | orchestrator | 2026-03-24 02:14:26.910476 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-03-24 02:14:26.910487 | orchestrator | Tuesday 24 March 2026 02:14:15 +0000 (0:00:01.869) 0:00:04.962 ********* 2026-03-24 02:14:26.910498 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:14:26.910529 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:14:26.910540 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:14:26.910550 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:14:26.910561 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:14:26.910571 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:14:26.910582 | orchestrator | 2026-03-24 02:14:26.910602 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-24 02:14:26.910613 | orchestrator | Tuesday 24 March 2026 02:14:15 +0000 (0:00:00.851) 0:00:05.814 ********* 2026-03-24 02:14:26.910624 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:14:26.910635 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:14:26.910646 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:14:26.910657 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:14:26.910667 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:14:26.910678 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:14:26.910689 | orchestrator | 2026-03-24 02:14:26.910699 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-24 02:14:26.910710 | orchestrator | Tuesday 24 March 2026 02:14:16 +0000 (0:00:00.455) 0:00:06.269 ********* 2026-03-24 02:14:26.910721 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:14:26.910731 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:14:26.910742 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:14:26.910753 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:14:26.910763 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:14:26.910774 | orchestrator | 
skipping: [testbed-node-2] 2026-03-24 02:14:26.910784 | orchestrator | 2026-03-24 02:14:26.910795 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-24 02:14:26.910806 | orchestrator | Tuesday 24 March 2026 02:14:17 +0000 (0:00:00.584) 0:00:06.853 ********* 2026-03-24 02:14:26.910817 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 02:14:26.910828 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 02:14:26.910839 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:14:26.910849 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 02:14:26.910860 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 02:14:26.910871 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:14:26.910882 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 02:14:26.910892 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 02:14:26.910903 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:14:26.910914 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 02:14:26.910946 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 02:14:26.910957 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:14:26.910968 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 02:14:26.910979 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 02:14:26.910990 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:14:26.911000 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 02:14:26.911011 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 02:14:26.911022 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:14:26.911032 | orchestrator | 2026-03-24 02:14:26.911043 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-24 02:14:26.911054 | orchestrator | Tuesday 24 March 2026 02:14:17 +0000 (0:00:00.498) 0:00:07.351 ********* 2026-03-24 02:14:26.911065 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:14:26.911075 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:14:26.911086 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:14:26.911104 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:14:26.911115 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:14:26.911126 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:14:26.911136 | orchestrator | 2026-03-24 02:14:26.911147 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-24 02:14:26.911159 | orchestrator | Tuesday 24 March 2026 02:14:18 +0000 (0:00:00.947) 0:00:08.298 ********* 2026-03-24 02:14:26.911170 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:14:26.911181 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:14:26.911191 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:14:26.911202 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:14:26.911213 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:14:26.911223 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:14:26.911234 | orchestrator | 2026-03-24 02:14:26.911245 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-24 02:14:26.911255 | orchestrator | Tuesday 24 March 2026 02:14:19 +0000 (0:00:00.670) 0:00:08.968 ********* 2026-03-24 02:14:26.911266 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:14:26.911277 | orchestrator | changed: 
[testbed-node-2] 2026-03-24 02:14:26.911288 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:14:26.911298 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:14:26.911309 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:14:26.911319 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:14:26.911330 | orchestrator | 2026-03-24 02:14:26.911341 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-24 02:14:26.911352 | orchestrator | Tuesday 24 March 2026 02:14:24 +0000 (0:00:05.109) 0:00:14.078 ********* 2026-03-24 02:14:26.911362 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:14:26.911379 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:14:26.911390 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:14:26.911449 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:14:26.911462 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:14:26.911473 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:14:26.911484 | orchestrator | 2026-03-24 02:14:26.911494 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-24 02:14:26.911505 | orchestrator | Tuesday 24 March 2026 02:14:24 +0000 (0:00:00.633) 0:00:14.711 ********* 2026-03-24 02:14:26.911516 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:14:26.911527 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:14:26.911537 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:14:26.911548 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:14:26.911558 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:14:26.911569 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:14:26.911579 | orchestrator | 2026-03-24 02:14:26.911590 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-24 02:14:26.911603 | orchestrator | Tuesday 24 
March 2026 02:14:25 +0000 (0:00:00.946) 0:00:15.658 *********
2026-03-24 02:14:26.911613 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:14:26.911624 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:14:26.911635 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:14:26.911645 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:14:26.911656 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:14:26.911666 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:14:26.911677 | orchestrator |
2026-03-24 02:14:26.911688 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-03-24 02:14:26.911698 | orchestrator | Tuesday 24 March 2026 02:14:26 +0000 (0:00:00.465) 0:00:16.124 *********
2026-03-24 02:14:26.911709 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-03-24 02:14:26.911727 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-03-24 02:14:26.911738 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:14:26.911748 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-03-24 02:14:26.911766 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-03-24 02:14:26.911777 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:14:26.911787 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-03-24 02:14:26.911798 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-03-24 02:14:26.911809 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:14:26.911820 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-03-24 02:14:26.911830 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-03-24 02:14:26.911841 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:14:26.911852 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-03-24 02:14:26.911862 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-03-24 02:14:26.911873 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:14:26.911884 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-03-24 02:14:26.911894 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-03-24 02:14:26.911905 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:14:26.911916 | orchestrator |
2026-03-24 02:14:26.911927 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-03-24 02:14:26.911944 | orchestrator | Tuesday 24 March 2026 02:14:26 +0000 (0:00:00.619) 0:00:16.743 *********
2026-03-24 02:15:38.822117 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:15:38.822256 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:15:38.822283 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:15:38.822303 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:15:38.822322 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:15:38.822341 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:15:38.822359 | orchestrator |
2026-03-24 02:15:38.822381 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-03-24 02:15:38.822402 | orchestrator | Tuesday 24 March 2026 02:14:27 +0000 (0:00:00.446) 0:00:17.190 *********
2026-03-24 02:15:38.822449 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:15:38.822468 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:15:38.822486 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:15:38.822504 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:15:38.822523 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:15:38.822540 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:15:38.822559 | orchestrator |
2026-03-24 02:15:38.822579 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-03-24 02:15:38.822598 | orchestrator |
2026-03-24 02:15:38.822618 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-03-24 02:15:38.822639 | orchestrator | Tuesday 24 March 2026 02:14:28 +0000 (0:00:00.942) 0:00:18.133 *********
2026-03-24 02:15:38.822659 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:15:38.822679 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:15:38.822699 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:15:38.822717 | orchestrator |
2026-03-24 02:15:38.822736 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-03-24 02:15:38.822756 | orchestrator | Tuesday 24 March 2026 02:14:29 +0000 (0:00:00.940) 0:00:19.073 *********
2026-03-24 02:15:38.822776 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:15:38.822795 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:15:38.822814 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:15:38.822833 | orchestrator |
2026-03-24 02:15:38.822852 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-03-24 02:15:38.822871 | orchestrator | Tuesday 24 March 2026 02:14:30 +0000 (0:00:00.991) 0:00:20.065 *********
2026-03-24 02:15:38.822891 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:15:38.822910 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:15:38.822929 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:15:38.822948 | orchestrator |
2026-03-24 02:15:38.822966 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-03-24 02:15:38.823020 | orchestrator | Tuesday 24 March 2026 02:14:31 +0000 (0:00:00.911) 0:00:20.976 *********
2026-03-24 02:15:38.823040 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:15:38.823058 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:15:38.823076 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:15:38.823094 | orchestrator |
2026-03-24 02:15:38.823113 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-03-24 02:15:38.823132 | orchestrator | Tuesday 24 March 2026 02:14:31 +0000 (0:00:00.673) 0:00:21.649 *********
2026-03-24 02:15:38.823151 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:15:38.823169 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:15:38.823187 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:15:38.823204 | orchestrator |
2026-03-24 02:15:38.823223 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-03-24 02:15:38.823264 | orchestrator | Tuesday 24 March 2026 02:14:32 +0000 (0:00:00.313) 0:00:21.962 *********
2026-03-24 02:15:38.823282 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:15:38.823301 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:15:38.823318 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:15:38.823337 | orchestrator |
2026-03-24 02:15:38.823357 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-03-24 02:15:38.823375 | orchestrator | Tuesday 24 March 2026 02:14:32 +0000 (0:00:00.850) 0:00:22.813 *********
2026-03-24 02:15:38.823394 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:15:38.823436 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:15:38.823457 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:15:38.823476 | orchestrator |
2026-03-24 02:15:38.823495 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-03-24 02:15:38.823513 | orchestrator | Tuesday 24 March 2026 02:14:34 +0000 (0:00:01.236) 0:00:24.050 *********
2026-03-24 02:15:38.823531 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:15:38.823549 | orchestrator |
2026-03-24 02:15:38.823568 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-03-24 02:15:38.823586 | orchestrator | Tuesday 24 March 2026 02:14:34 +0000 (0:00:00.445) 0:00:24.496 *********
2026-03-24 02:15:38.823605 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:15:38.823623 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:15:38.823641 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:15:38.823660 | orchestrator |
2026-03-24 02:15:38.823677 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-03-24 02:15:38.823697 | orchestrator | Tuesday 24 March 2026 02:14:35 +0000 (0:00:01.241) 0:00:25.737 *********
2026-03-24 02:15:38.823716 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:15:38.823734 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:15:38.823752 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:15:38.823770 | orchestrator |
2026-03-24 02:15:38.823788 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-03-24 02:15:38.823807 | orchestrator | Tuesday 24 March 2026 02:14:36 +0000 (0:00:00.526) 0:00:26.264 *********
2026-03-24 02:15:38.823827 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:15:38.823845 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:15:38.823863 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:15:38.823881 | orchestrator |
2026-03-24 02:15:38.823899 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-03-24 02:15:38.823918 | orchestrator | Tuesday 24 March 2026 02:14:37 +0000 (0:00:00.987) 0:00:27.251 *********
2026-03-24 02:15:38.823937 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:15:38.823956 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:15:38.823974 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:15:38.823992 | orchestrator |
2026-03-24 02:15:38.824010 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-03-24 02:15:38.824053 | orchestrator | Tuesday 24 March 2026 02:14:38 +0000 (0:00:01.241) 0:00:28.493 *********
2026-03-24 02:15:38.824074 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:15:38.824105 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:15:38.824124 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:15:38.824142 | orchestrator |
2026-03-24 02:15:38.824161 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-03-24 02:15:38.824180 | orchestrator | Tuesday 24 March 2026 02:14:39 +0000 (0:00:00.490) 0:00:28.984 *********
2026-03-24 02:15:38.824199 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:15:38.824217 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:15:38.824235 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:15:38.824254 | orchestrator |
2026-03-24 02:15:38.824272 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-03-24 02:15:38.824291 | orchestrator | Tuesday 24 March 2026 02:14:39 +0000 (0:00:00.310) 0:00:29.294 *********
2026-03-24 02:15:38.824309 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:15:38.824327 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:15:38.824345 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:15:38.824363 | orchestrator |
2026-03-24 02:15:38.824390 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-03-24 02:15:38.824410 | orchestrator | Tuesday 24 March 2026 02:14:40 +0000 (0:00:01.144) 0:00:30.439 *********
2026-03-24 02:15:38.824473 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:15:38.824492 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:15:38.824512 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:15:38.824530 | orchestrator |
2026-03-24 02:15:38.824548 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-03-24 02:15:38.824567 | orchestrator | Tuesday 24 March 2026 02:14:43 +0000 (0:00:02.752) 0:00:33.192 *********
2026-03-24 02:15:38.824585 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:15:38.824604 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:15:38.824623 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:15:38.824648 | orchestrator |
2026-03-24 02:15:38.824667 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-03-24 02:15:38.824687 | orchestrator | Tuesday 24 March 2026 02:14:43 +0000 (0:00:00.314) 0:00:33.506 *********
2026-03-24 02:15:38.824705 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-24 02:15:38.824726 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-24 02:15:38.824746 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-03-24 02:15:38.824766 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-24 02:15:38.824784 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-24 02:15:38.824803 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-03-24 02:15:38.824821 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-24 02:15:38.824839 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-24 02:15:38.824859 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-03-24 02:15:38.824878 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-24 02:15:38.824896 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-24 02:15:38.824926 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-03-24 02:15:38.824944 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-24 02:15:38.824963 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-03-24 02:15:38.824982 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
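The retry messages above come from an Ansible task that polls the cluster until every host appears in the node list (`retries: 20` with a fixed delay, judging by the countdown). A rough Python sketch of that polling pattern, with a simulated node source standing in for `kubectl get nodes` (the function and data here are illustrative, not the role's actual code):

```python
import time


def all_nodes_joined(get_nodes, expected, retries=20, delay=3):
    """Poll get_nodes() until every expected node name appears,
    mimicking Ansible's retries/delay/until loop."""
    for _ in range(retries):
        if set(expected) <= set(get_nodes()):
            return True
        time.sleep(delay)
    return False


# Simulated node listings that grow as masters register (illustrative only).
snapshots = iter([
    ["testbed-node-0"],
    ["testbed-node-0", "testbed-node-1"],
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
])
current = []


def get_nodes():
    global current
    try:
        current = next(snapshots)
    except StopIteration:
        pass  # keep returning the last snapshot once exhausted
    return current


print(all_nodes_joined(
    get_nodes,
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    delay=0,
))
```

In the real run the loop failed four to five times per node before all three masters registered, which matches the roughly 54-second task duration reported afterwards.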
2026-03-24 02:15:38.825000 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:15:38.825019 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:15:38.825037 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:15:38.825054 | orchestrator |
2026-03-24 02:15:38.825081 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-03-24 02:15:38.825101 | orchestrator | Tuesday 24 March 2026 02:15:37 +0000 (0:00:53.913) 0:01:27.419 *********
2026-03-24 02:15:38.825119 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:15:38.825137 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:15:38.825155 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:15:38.825173 | orchestrator |
2026-03-24 02:15:38.825191 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-03-24 02:15:38.825210 | orchestrator | Tuesday 24 March 2026 02:15:37 +0000 (0:00:00.287) 0:01:27.707 *********
2026-03-24 02:15:38.825240 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:16:19.582174 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:16:19.582291 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:16:19.582308 | orchestrator |
2026-03-24 02:16:19.582322 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-03-24 02:16:19.582335 | orchestrator | Tuesday 24 March 2026 02:15:38 +0000 (0:00:00.953) 0:01:28.660 *********
2026-03-24 02:16:19.582346 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:16:19.582357 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:16:19.582368 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:16:19.582379 | orchestrator |
2026-03-24 02:16:19.582391 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-03-24 02:16:19.582402 | orchestrator | Tuesday 24 March 2026 02:15:40 +0000 (0:00:01.254) 0:01:29.914 *********
2026-03-24 02:16:19.582413 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:16:19.582455 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:16:19.582466 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:16:19.582477 | orchestrator |
2026-03-24 02:16:19.582488 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-03-24 02:16:19.582499 | orchestrator | Tuesday 24 March 2026 02:16:05 +0000 (0:00:25.488) 0:01:55.402 *********
2026-03-24 02:16:19.582510 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:16:19.582522 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:16:19.582533 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:16:19.582544 | orchestrator |
2026-03-24 02:16:19.582555 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-03-24 02:16:19.582566 | orchestrator | Tuesday 24 March 2026 02:16:06 +0000 (0:00:00.613) 0:01:56.015 *********
2026-03-24 02:16:19.582577 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:16:19.582588 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:16:19.582599 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:16:19.582609 | orchestrator |
2026-03-24 02:16:19.582620 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-03-24 02:16:19.582631 | orchestrator | Tuesday 24 March 2026 02:16:06 +0000 (0:00:00.616) 0:01:56.632 *********
2026-03-24 02:16:19.582642 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:16:19.582653 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:16:19.582664 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:16:19.582675 | orchestrator |
2026-03-24 02:16:19.582686 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-03-24 02:16:19.582720 | orchestrator | Tuesday 24 March 2026 02:16:07 +0000 (0:00:00.576) 0:01:57.208 *********
2026-03-24 02:16:19.582732 | orchestrator | ok: [testbed-node-0]
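The node-token tasks around this point follow a register-mode / widen / read / restore sequence, so the restrictive permissions on the k3s join token survive the read. A minimal local sketch of that pattern (the real token lives at `/var/lib/rancher/k3s/server/node-token` on the masters; the file and token value below are stand-ins):

```python
import os
import stat
import tempfile


def read_with_mode_restore(path):
    """Save a file's mode, widen it to read the contents, then restore it --
    the same four-step dance as the k3s_server node-token tasks."""
    original_mode = stat.S_IMODE(os.stat(path).st_mode)  # "Register node-token file access mode"
    os.chmod(path, 0o644)                                # "Change file access node-token"
    try:
        with open(path) as f:
            token = f.read().strip()                     # "Read node-token from master"
    finally:
        os.chmod(path, original_mode)                    # "Restore node-token file access"
    return token, original_mode


# Demonstrate on a stand-in file carrying k3s's usual restrictive 0600 mode.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("K10abcd::server:secret\n")
    path = f.name
os.chmod(path, 0o600)

token, mode = read_with_mode_restore(path)
print(token, oct(mode))
os.unlink(path)
```

The `try`/`finally` mirrors the role's intent: even if the read fails, the original mode is put back.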
2026-03-24 02:16:19.582743 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:16:19.582754 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:16:19.582764 | orchestrator |
2026-03-24 02:16:19.582775 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-03-24 02:16:19.582786 | orchestrator | Tuesday 24 March 2026 02:16:08 +0000 (0:00:00.755) 0:01:57.963 *********
2026-03-24 02:16:19.582797 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:16:19.582807 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:16:19.582818 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:16:19.582829 | orchestrator |
2026-03-24 02:16:19.582840 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-03-24 02:16:19.582851 | orchestrator | Tuesday 24 March 2026 02:16:08 +0000 (0:00:00.299) 0:01:58.262 *********
2026-03-24 02:16:19.582862 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:16:19.582873 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:16:19.582883 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:16:19.582894 | orchestrator |
2026-03-24 02:16:19.582905 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-03-24 02:16:19.582916 | orchestrator | Tuesday 24 March 2026 02:16:09 +0000 (0:00:00.678) 0:01:58.941 *********
2026-03-24 02:16:19.582926 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:16:19.582937 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:16:19.582948 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:16:19.582959 | orchestrator |
2026-03-24 02:16:19.582970 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-03-24 02:16:19.582981 | orchestrator | Tuesday 24 March 2026 02:16:09 +0000 (0:00:00.664) 0:01:59.606 *********
2026-03-24 02:16:19.582992 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:16:19.583002 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:16:19.583013 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:16:19.583024 | orchestrator |
2026-03-24 02:16:19.583035 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-03-24 02:16:19.583046 | orchestrator | Tuesday 24 March 2026 02:16:10 +0000 (0:00:00.862) 0:02:00.468 *********
2026-03-24 02:16:19.583058 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:16:19.583069 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:16:19.583080 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:16:19.583090 | orchestrator |
2026-03-24 02:16:19.583101 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-03-24 02:16:19.583112 | orchestrator | Tuesday 24 March 2026 02:16:11 +0000 (0:00:01.109) 0:02:01.577 *********
2026-03-24 02:16:19.583122 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:16:19.583133 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:16:19.583143 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:16:19.583154 | orchestrator |
2026-03-24 02:16:19.583165 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-03-24 02:16:19.583175 | orchestrator | Tuesday 24 March 2026 02:16:12 +0000 (0:00:00.275) 0:02:01.853 *********
2026-03-24 02:16:19.583186 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:16:19.583196 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:16:19.583207 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:16:19.583217 | orchestrator |
2026-03-24 02:16:19.583228 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-03-24 02:16:19.583239 | orchestrator | Tuesday 24 March 2026 02:16:12 +0000 (0:00:00.281) 0:02:02.135 *********
2026-03-24 02:16:19.583249 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:16:19.583260 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:16:19.583270 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:16:19.583281 | orchestrator |
2026-03-24 02:16:19.583292 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-03-24 02:16:19.583302 | orchestrator | Tuesday 24 March 2026 02:16:12 +0000 (0:00:00.600) 0:02:02.735 *********
2026-03-24 02:16:19.583320 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:16:19.583331 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:16:19.583359 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:16:19.583371 | orchestrator |
2026-03-24 02:16:19.583383 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-03-24 02:16:19.583395 | orchestrator | Tuesday 24 March 2026 02:16:13 +0000 (0:00:00.812) 0:02:03.547 *********
2026-03-24 02:16:19.583406 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-24 02:16:19.583417 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-24 02:16:19.583480 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-03-24 02:16:19.583492 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-24 02:16:19.583503 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-24 02:16:19.583514 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-03-24 02:16:19.583525 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-24 02:16:19.583537 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-24 02:16:19.583547 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-03-24 02:16:19.583559 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-03-24 02:16:19.583570 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-24 02:16:19.583580 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-24 02:16:19.583591 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-03-24 02:16:19.583602 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-24 02:16:19.583613 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-24 02:16:19.583624 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-03-24 02:16:19.583635 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-24 02:16:19.583646 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-24 02:16:19.583657 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-03-24 02:16:19.583668 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-03-24 02:16:19.583679 | orchestrator |
2026-03-24 02:16:19.583690 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-03-24 02:16:19.583701 | orchestrator |
2026-03-24 02:16:19.583712 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-03-24 02:16:19.583723 | orchestrator | Tuesday 24 March 2026 02:16:16 +0000 (0:00:03.019) 0:02:06.567 *********
2026-03-24 02:16:19.583734 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:16:19.583745 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:16:19.583756 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:16:19.583767 | orchestrator |
2026-03-24 02:16:19.583793 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-03-24 02:16:19.583805 | orchestrator | Tuesday 24 March 2026 02:16:17 +0000 (0:00:00.317) 0:02:06.884 *********
2026-03-24 02:16:19.583815 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:16:19.583826 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:16:19.583837 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:16:19.583857 | orchestrator |
2026-03-24 02:16:19.583868 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-03-24 02:16:19.583879 | orchestrator | Tuesday 24 March 2026 02:16:17 +0000 (0:00:00.831) 0:02:07.716 *********
2026-03-24 02:16:19.583890 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:16:19.583901 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:16:19.583911 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:16:19.583927 | orchestrator |
2026-03-24 02:16:19.583945 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-03-24 02:16:19.583964 | orchestrator | Tuesday 24 March 2026 02:16:18 +0000 (0:00:00.332) 0:02:08.049 *********
2026-03-24 02:16:19.583988 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:16:19.584015 | orchestrator |
2026-03-24 02:16:19.584032 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-03-24 02:16:19.584050 | orchestrator | Tuesday 24 March 2026 02:16:18 +0000 (0:00:00.449) 0:02:08.498 *********
2026-03-24 02:16:19.584068 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:16:19.584087 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:16:19.584104 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:16:19.584123 | orchestrator |
2026-03-24 02:16:19.584140 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-03-24 02:16:19.584159 | orchestrator | Tuesday 24 March 2026 02:16:19 +0000 (0:00:00.464) 0:02:08.963 *********
2026-03-24 02:16:19.584177 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:16:19.584193 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:16:19.584212 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:16:19.584224 | orchestrator |
2026-03-24 02:16:19.584235 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-03-24 02:16:19.584246 | orchestrator | Tuesday 24 March 2026 02:16:19 +0000 (0:00:00.291) 0:02:09.255 *********
2026-03-24 02:16:19.584267 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:17:45.394759 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:17:45.394844 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:17:45.394854 | orchestrator |
2026-03-24 02:17:45.394863 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-03-24 02:17:45.394871 | orchestrator | Tuesday 24 March 2026 02:16:19 +0000 (0:00:00.302) 0:02:09.557 *********
2026-03-24 02:17:45.394877 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:17:45.394883 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:17:45.394889 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:17:45.394895 | orchestrator |
2026-03-24 02:17:45.394902 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-03-24 02:17:45.394908 | orchestrator | Tuesday 24 March 2026 02:16:20 +0000 (0:00:00.630) 0:02:10.188 *********
2026-03-24 02:17:45.394914 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:17:45.394919 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:17:45.394926 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:17:45.394932 | orchestrator |
2026-03-24 02:17:45.394938 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-03-24 02:17:45.394946 | orchestrator | Tuesday 24 March 2026 02:16:21 +0000 (0:00:01.499) 0:02:11.688 *********
2026-03-24 02:17:45.394952 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:17:45.394958 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:17:45.394964 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:17:45.394970 | orchestrator |
2026-03-24 02:17:45.394977 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-03-24 02:17:45.394983 | orchestrator | Tuesday 24 March 2026 02:16:23 +0000 (0:00:01.243) 0:02:12.931 *********
2026-03-24 02:17:45.394989 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:17:45.394995 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:17:45.395001 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:17:45.395007 | orchestrator |
2026-03-24 02:17:45.395015 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-03-24 02:17:45.395043 | orchestrator |
2026-03-24 02:17:45.395050 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-03-24 02:17:45.395056 | orchestrator | Tuesday 24 March 2026 02:16:33 +0000 (0:00:10.164) 0:02:23.095 *********
2026-03-24 02:17:45.395063 | orchestrator | ok: [testbed-manager]
2026-03-24 02:17:45.395084 | orchestrator |
2026-03-24 02:17:45.395097 | orchestrator | TASK [Create .kube directory] **************************************************
2026-03-24 02:17:45.395104 | orchestrator | Tuesday 24 March 2026 02:16:34 +0000 (0:00:00.778) 0:02:23.874 *********
2026-03-24 02:17:45.395110 | orchestrator | changed: [testbed-manager]
2026-03-24 02:17:45.395116 | orchestrator |
2026-03-24 02:17:45.395123 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-24 02:17:45.395129 | orchestrator | Tuesday 24 March 2026 02:16:34 +0000 (0:00:00.587) 0:02:24.461 *********
2026-03-24 02:17:45.395136 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-24 02:17:45.395141 | orchestrator |
2026-03-24 02:17:45.395145 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-24 02:17:45.395149 | orchestrator | Tuesday 24 March 2026 02:16:35 +0000 (0:00:00.566) 0:02:25.028 *********
2026-03-24 02:17:45.395152 | orchestrator | changed: [testbed-manager]
2026-03-24 02:17:45.395156 | orchestrator |
2026-03-24 02:17:45.395160 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-03-24 02:17:45.395164 | orchestrator | Tuesday 24 March 2026 02:16:36 +0000 (0:00:00.842) 0:02:25.871 *********
2026-03-24 02:17:45.395168 | orchestrator | changed: [testbed-manager]
2026-03-24 02:17:45.395171 | orchestrator |
2026-03-24 02:17:45.395175 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-03-24 02:17:45.395179 | orchestrator | Tuesday 24 March 2026 02:16:36 +0000 (0:00:00.567) 0:02:26.438 *********
2026-03-24 02:17:45.395183 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-24 02:17:45.395186 | orchestrator |
2026-03-24 02:17:45.395190 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-03-24 02:17:45.395194 | orchestrator | Tuesday 24 March 2026 02:16:38 +0000 (0:00:01.508) 0:02:27.946 *********
2026-03-24 02:17:45.395198 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-24 02:17:45.395201 | orchestrator |
2026-03-24 02:17:45.395220 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
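The two "Change server address" tasks in the kubeconfig play rewrite the `server:` URL in the copied kubeconfig so clients reach the API endpoint the playbook configured (the log shows `https://192.168.16.8:6443` used as the cluster address). A minimal sketch of that substitution; the sample document and helper name are illustrative, and the playbook itself presumably does this with an Ansible module rather than Python:

```python
import re


def point_kubeconfig_at(kubeconfig_text, new_server):
    """Replace every cluster 'server:' URL in a kubeconfig document."""
    return re.sub(r"(?m)^(\s*server:\s*).*$", r"\g<1>" + new_server, kubeconfig_text)


sample = """apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
"""

print(point_kubeconfig_at(sample, "https://192.168.16.8:6443"))
```

Doing the rewrite textually keeps the rest of the kubeconfig (credentials, contexts) untouched, which matches the log: only the server address tasks report `changed`.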
2026-03-24 02:17:45.395224 | orchestrator | Tuesday 24 March 2026 02:16:38 +0000 (0:00:00.784) 0:02:28.731 *********
2026-03-24 02:17:45.395228 | orchestrator | changed: [testbed-manager]
2026-03-24 02:17:45.395234 | orchestrator |
2026-03-24 02:17:45.395240 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-03-24 02:17:45.395246 | orchestrator | Tuesday 24 March 2026 02:16:39 +0000 (0:00:00.410) 0:02:29.142 *********
2026-03-24 02:17:45.395252 | orchestrator | changed: [testbed-manager]
2026-03-24 02:17:45.395257 | orchestrator |
2026-03-24 02:17:45.395262 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-03-24 02:17:45.395268 | orchestrator |
2026-03-24 02:17:45.395274 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-03-24 02:17:45.395280 | orchestrator | Tuesday 24 March 2026 02:16:39 +0000 (0:00:00.458) 0:02:29.601 *********
2026-03-24 02:17:45.395286 | orchestrator | ok: [testbed-manager]
2026-03-24 02:17:45.395291 | orchestrator |
2026-03-24 02:17:45.395297 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-03-24 02:17:45.395303 | orchestrator | Tuesday 24 March 2026 02:16:39 +0000 (0:00:00.173) 0:02:29.774 *********
2026-03-24 02:17:45.395310 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-03-24 02:17:45.395317 | orchestrator |
2026-03-24 02:17:45.395324 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-03-24 02:17:45.395330 | orchestrator | Tuesday 24 March 2026 02:16:40 +0000 (0:00:00.380) 0:02:30.154 *********
2026-03-24 02:17:45.395338 | orchestrator | ok: [testbed-manager]
2026-03-24 02:17:45.395342 | orchestrator |
2026-03-24 02:17:45.395353 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-03-24 02:17:45.395357 | orchestrator | Tuesday 24 March 2026 02:16:41 +0000 (0:00:00.792) 0:02:30.947 *********
2026-03-24 02:17:45.395362 | orchestrator | ok: [testbed-manager]
2026-03-24 02:17:45.395366 | orchestrator |
2026-03-24 02:17:45.395384 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-03-24 02:17:45.395389 | orchestrator | Tuesday 24 March 2026 02:16:42 +0000 (0:00:01.378) 0:02:32.326 *********
2026-03-24 02:17:45.395394 | orchestrator | changed: [testbed-manager]
2026-03-24 02:17:45.395398 | orchestrator |
2026-03-24 02:17:45.395402 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-03-24 02:17:45.395406 | orchestrator | Tuesday 24 March 2026 02:16:43 +0000 (0:00:00.762) 0:02:33.088 *********
2026-03-24 02:17:45.395411 | orchestrator | ok: [testbed-manager]
2026-03-24 02:17:45.395415 | orchestrator |
2026-03-24 02:17:45.395419 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-03-24 02:17:45.395423 | orchestrator | Tuesday 24 March 2026 02:16:43 +0000 (0:00:00.450) 0:02:33.539 *********
2026-03-24 02:17:45.395427 | orchestrator | changed: [testbed-manager]
2026-03-24 02:17:45.395432 | orchestrator |
2026-03-24 02:17:45.395436 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-03-24 02:17:45.395502 | orchestrator | Tuesday 24 March 2026 02:16:50 +0000 (0:00:06.441) 0:02:39.980 *********
2026-03-24 02:17:45.395509 | orchestrator | changed: [testbed-manager]
2026-03-24 02:17:45.395513 | orchestrator |
2026-03-24 02:17:45.395518 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-03-24 02:17:45.395522 | orchestrator | Tuesday 24 March 2026 02:17:01 +0000 (0:00:11.225) 0:02:51.206 *********
2026-03-24 02:17:45.395533 | orchestrator | ok: [testbed-manager]
2026-03-24 02:17:45.395537 | orchestrator |
2026-03-24 02:17:45.395542 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-03-24 02:17:45.395546 | orchestrator |
2026-03-24 02:17:45.395551 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-03-24 02:17:45.395555 | orchestrator | Tuesday 24 March 2026 02:17:02 +0000 (0:00:00.687) 0:02:51.893 *********
2026-03-24 02:17:45.395559 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:17:45.395569 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:17:45.395574 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:17:45.395578 | orchestrator |
2026-03-24 02:17:45.395583 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-03-24 02:17:45.395587 | orchestrator | Tuesday 24 March 2026 02:17:02 +0000 (0:00:00.277) 0:02:52.170 *********
2026-03-24 02:17:45.395591 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:17:45.395596 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:17:45.395606 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:17:45.395610 | orchestrator |
2026-03-24 02:17:45.395615 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-03-24 02:17:45.395619 | orchestrator | Tuesday 24 March 2026 02:17:02 +0000 (0:00:00.296) 0:02:52.467 *********
2026-03-24 02:17:45.395623 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:17:45.395628 | orchestrator |
2026-03-24 02:17:45.395633 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-03-24 02:17:45.395637 | orchestrator | Tuesday 24 March 2026 02:17:03 +0000 (0:00:00.632) 0:02:53.099 *********
2026-03-24 02:17:45.395641 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-24 02:17:45.395645 |
orchestrator | 2026-03-24 02:17:45.395650 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-24 02:17:45.395655 | orchestrator | Tuesday 24 March 2026 02:17:04 +0000 (0:00:00.777) 0:02:53.877 ********* 2026-03-24 02:17:45.395659 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 02:17:45.395663 | orchestrator | 2026-03-24 02:17:45.395668 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-24 02:17:45.395679 | orchestrator | Tuesday 24 March 2026 02:17:04 +0000 (0:00:00.793) 0:02:54.670 ********* 2026-03-24 02:17:45.395683 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:17:45.395688 | orchestrator | 2026-03-24 02:17:45.395692 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-24 02:17:45.395695 | orchestrator | Tuesday 24 March 2026 02:17:04 +0000 (0:00:00.116) 0:02:54.787 ********* 2026-03-24 02:17:45.395699 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 02:17:45.395703 | orchestrator | 2026-03-24 02:17:45.395707 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-24 02:17:45.395711 | orchestrator | Tuesday 24 March 2026 02:17:05 +0000 (0:00:00.971) 0:02:55.758 ********* 2026-03-24 02:17:45.395714 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:17:45.395718 | orchestrator | 2026-03-24 02:17:45.395722 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-24 02:17:45.395725 | orchestrator | Tuesday 24 March 2026 02:17:06 +0000 (0:00:00.125) 0:02:55.884 ********* 2026-03-24 02:17:45.395729 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:17:45.395733 | orchestrator | 2026-03-24 02:17:45.395737 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-24 02:17:45.395740 | orchestrator | Tuesday 24 March 
2026 02:17:06 +0000 (0:00:00.135) 0:02:56.019 ********* 2026-03-24 02:17:45.395744 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:17:45.395748 | orchestrator | 2026-03-24 02:17:45.395751 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-24 02:17:45.395761 | orchestrator | Tuesday 24 March 2026 02:17:06 +0000 (0:00:00.134) 0:02:56.154 ********* 2026-03-24 02:17:45.395765 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:17:45.395768 | orchestrator | 2026-03-24 02:17:45.395772 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-24 02:17:45.395776 | orchestrator | Tuesday 24 March 2026 02:17:06 +0000 (0:00:00.126) 0:02:56.281 ********* 2026-03-24 02:17:45.395779 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-24 02:17:45.395783 | orchestrator | 2026-03-24 02:17:45.395787 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-24 02:17:45.395791 | orchestrator | Tuesday 24 March 2026 02:17:12 +0000 (0:00:05.633) 0:03:01.914 ********* 2026-03-24 02:17:45.395794 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-24 02:17:45.395799 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-24 02:17:45.395807 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-24 02:18:06.331821 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-24 02:18:06.331941 | orchestrator | 2026-03-24 02:18:06.331958 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-24 02:18:06.331970 | orchestrator | Tuesday 24 March 2026 02:17:45 +0000 (0:00:33.320) 0:03:35.235 ********* 2026-03-24 02:18:06.331982 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 02:18:06.331993 | orchestrator | 2026-03-24 
02:18:06.332004 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-24 02:18:06.332016 | orchestrator | Tuesday 24 March 2026 02:17:46 +0000 (0:00:01.159) 0:03:36.394 ********* 2026-03-24 02:18:06.332027 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-24 02:18:06.332038 | orchestrator | 2026-03-24 02:18:06.332049 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-24 02:18:06.332060 | orchestrator | Tuesday 24 March 2026 02:17:47 +0000 (0:00:01.415) 0:03:37.809 ********* 2026-03-24 02:18:06.332071 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-24 02:18:06.332081 | orchestrator | 2026-03-24 02:18:06.332093 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-24 02:18:06.332104 | orchestrator | Tuesday 24 March 2026 02:17:49 +0000 (0:00:01.178) 0:03:38.988 ********* 2026-03-24 02:18:06.332115 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:18:06.332126 | orchestrator | 2026-03-24 02:18:06.332165 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-24 02:18:06.332177 | orchestrator | Tuesday 24 March 2026 02:17:49 +0000 (0:00:00.123) 0:03:39.112 ********* 2026-03-24 02:18:06.332188 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-24 02:18:06.332200 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-24 02:18:06.332210 | orchestrator | 2026-03-24 02:18:06.332221 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-24 02:18:06.332232 | orchestrator | Tuesday 24 March 2026 02:17:50 +0000 (0:00:01.704) 0:03:40.817 ********* 2026-03-24 02:18:06.332243 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:18:06.332254 | orchestrator | 
skipping: [testbed-node-1] 2026-03-24 02:18:06.332265 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:18:06.332275 | orchestrator | 2026-03-24 02:18:06.332302 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-24 02:18:06.332314 | orchestrator | Tuesday 24 March 2026 02:17:51 +0000 (0:00:00.289) 0:03:41.107 ********* 2026-03-24 02:18:06.332325 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:18:06.332336 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:18:06.332364 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:18:06.332389 | orchestrator | 2026-03-24 02:18:06.332402 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-24 02:18:06.332415 | orchestrator | 2026-03-24 02:18:06.332428 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-24 02:18:06.332441 | orchestrator | Tuesday 24 March 2026 02:17:52 +0000 (0:00:00.862) 0:03:41.970 ********* 2026-03-24 02:18:06.332482 | orchestrator | ok: [testbed-manager] 2026-03-24 02:18:06.332502 | orchestrator | 2026-03-24 02:18:06.332521 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-24 02:18:06.332543 | orchestrator | Tuesday 24 March 2026 02:17:52 +0000 (0:00:00.297) 0:03:42.267 ********* 2026-03-24 02:18:06.332563 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-24 02:18:06.332580 | orchestrator | 2026-03-24 02:18:06.332593 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-24 02:18:06.332606 | orchestrator | Tuesday 24 March 2026 02:17:52 +0000 (0:00:00.230) 0:03:42.497 ********* 2026-03-24 02:18:06.332619 | orchestrator | changed: [testbed-manager] 2026-03-24 02:18:06.332631 | orchestrator | 2026-03-24 02:18:06.332644 | orchestrator | PLAY [Manage labels, annotations, and 
taints on all k3s nodes] ***************** 2026-03-24 02:18:06.332657 | orchestrator | 2026-03-24 02:18:06.332670 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-24 02:18:06.332682 | orchestrator | Tuesday 24 March 2026 02:17:57 +0000 (0:00:05.095) 0:03:47.593 ********* 2026-03-24 02:18:06.332695 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:18:06.332707 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:18:06.332720 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:18:06.332731 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:18:06.332741 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:18:06.332752 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:18:06.332763 | orchestrator | 2026-03-24 02:18:06.332774 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-24 02:18:06.332785 | orchestrator | Tuesday 24 March 2026 02:17:58 +0000 (0:00:00.547) 0:03:48.141 ********* 2026-03-24 02:18:06.332795 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-24 02:18:06.332806 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-24 02:18:06.332817 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-24 02:18:06.332828 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-24 02:18:06.332838 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-24 02:18:06.332858 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-24 02:18:06.332869 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-24 02:18:06.332880 | orchestrator | ok: [testbed-node-3 -> localhost] => 
(item=node-role.kubernetes.io/worker=worker) 2026-03-24 02:18:06.332891 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-24 02:18:06.332902 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-24 02:18:06.332929 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-24 02:18:06.332941 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-24 02:18:06.332953 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-24 02:18:06.332964 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-24 02:18:06.332974 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-24 02:18:06.332985 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-24 02:18:06.333016 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-24 02:18:06.333034 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-24 02:18:06.333053 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-24 02:18:06.333067 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-24 02:18:06.333085 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-24 02:18:06.333097 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-24 02:18:06.333107 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-24 02:18:06.333118 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/rook-mgr=true) 2026-03-24 02:18:06.333129 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-24 02:18:06.333140 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-24 02:18:06.333150 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-24 02:18:06.333161 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-24 02:18:06.333171 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-24 02:18:06.333182 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-24 02:18:06.333193 | orchestrator | 2026-03-24 02:18:06.333204 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-24 02:18:06.333214 | orchestrator | Tuesday 24 March 2026 02:18:05 +0000 (0:00:07.060) 0:03:55.201 ********* 2026-03-24 02:18:06.333225 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:18:06.333236 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:18:06.333247 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:18:06.333258 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:18:06.333268 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:18:06.333279 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:18:06.333290 | orchestrator | 2026-03-24 02:18:06.333300 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-24 02:18:06.333311 | orchestrator | Tuesday 24 March 2026 02:18:05 +0000 (0:00:00.471) 0:03:55.672 ********* 2026-03-24 02:18:06.333322 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:18:06.333332 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:18:06.333343 | orchestrator | skipping: [testbed-node-5] 
2026-03-24 02:18:06.333360 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:18:06.333371 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:18:06.333381 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:18:06.333392 | orchestrator |
2026-03-24 02:18:06.333403 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:18:06.333414 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:18:06.333428 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-24 02:18:06.333439 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-24 02:18:06.333450 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-24 02:18:06.333523 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-24 02:18:06.333535 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-24 02:18:06.333545 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-24 02:18:06.333556 | orchestrator |
2026-03-24 02:18:06.333567 | orchestrator |
2026-03-24 02:18:06.333578 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:18:06.333588 | orchestrator | Tuesday 24 March 2026 02:18:06 +0000 (0:00:00.496) 0:03:56.168 *********
2026-03-24 02:18:06.333599 | orchestrator | ===============================================================================
2026-03-24 02:18:06.333618 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.91s
2026-03-24 02:18:06.545269 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 33.32s
2026-03-24 02:18:06.545362 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.49s
2026-03-24 02:18:06.545374 | orchestrator | kubectl : Install required packages ------------------------------------ 11.23s
2026-03-24 02:18:06.545385 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.16s
2026-03-24 02:18:06.545399 | orchestrator | Manage labels ----------------------------------------------------------- 7.06s
2026-03-24 02:18:06.545412 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.44s
2026-03-24 02:18:06.545424 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.63s
2026-03-24 02:18:06.545438 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.11s
2026-03-24 02:18:06.545451 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.10s
2026-03-24 02:18:06.545565 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.02s
2026-03-24 02:18:06.545580 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.75s
2026-03-24 02:18:06.545593 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.87s
2026-03-24 02:18:06.545606 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.70s
2026-03-24 02:18:06.545617 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s
2026-03-24 02:18:06.545629 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 1.50s
2026-03-24 02:18:06.545641 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.42s
2026-03-24 02:18:06.545654 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.38s
2026-03-24 02:18:06.545694 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.36s
2026-03-24 02:18:06.545708 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.25s
2026-03-24 02:18:06.727565 | orchestrator | + osism apply copy-kubeconfig
2026-03-24 02:18:18.621436 | orchestrator | 2026-03-24 02:18:18 | INFO  | Task 7930f748-d2ec-43a7-b2f1-27a96222713d (copy-kubeconfig) was prepared for execution.
2026-03-24 02:18:18.621585 | orchestrator | 2026-03-24 02:18:18 | INFO  | It takes a moment until task 7930f748-d2ec-43a7-b2f1-27a96222713d (copy-kubeconfig) has been started and output is visible here.
2026-03-24 02:18:24.870066 | orchestrator |
2026-03-24 02:18:24.870170 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-03-24 02:18:24.870183 | orchestrator |
2026-03-24 02:18:24.870193 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-03-24 02:18:24.870202 | orchestrator | Tuesday 24 March 2026 02:18:22 +0000 (0:00:00.117) 0:00:00.117 *********
2026-03-24 02:18:24.870211 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-24 02:18:24.870219 | orchestrator |
2026-03-24 02:18:24.870227 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-03-24 02:18:24.870235 | orchestrator | Tuesday 24 March 2026 02:18:23 +0000 (0:00:00.707) 0:00:00.824 *********
2026-03-24 02:18:24.870243 | orchestrator | changed: [testbed-manager]
2026-03-24 02:18:24.870252 | orchestrator |
2026-03-24 02:18:24.870277 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-03-24 02:18:24.870286 | orchestrator | Tuesday 24 March 2026 02:18:24 +0000 (0:00:01.016) 0:00:01.841 *********
2026-03-24 02:18:24.870294 | orchestrator | changed: [testbed-manager]
2026-03-24 02:18:24.870302 | orchestrator |
2026-03-24 02:18:24.870310 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:18:24.870326 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:18:24.870336 | orchestrator |
2026-03-24 02:18:24.870344 | orchestrator |
2026-03-24 02:18:24.870352 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:18:24.870360 | orchestrator | Tuesday 24 March 2026 02:18:24 +0000 (0:00:00.401) 0:00:02.242 *********
2026-03-24 02:18:24.870368 | orchestrator | ===============================================================================
2026-03-24 02:18:24.870375 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.02s
2026-03-24 02:18:24.870383 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.71s
2026-03-24 02:18:24.870391 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s
2026-03-24 02:18:25.048749 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-03-24 02:18:36.775846 | orchestrator | 2026-03-24 02:18:36 | INFO  | Task b90a98ba-1071-48ac-933b-2597f4549f7f (openstackclient) was prepared for execution.
2026-03-24 02:18:36.775971 | orchestrator | 2026-03-24 02:18:36 | INFO  | It takes a moment until task b90a98ba-1071-48ac-933b-2597f4549f7f (openstackclient) has been started and output is visible here.
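Editor's note: the "Change server address in the kubeconfig" tasks above rewrite the API endpoint in the kubeconfig that k3s generates, which by default points at the loopback address on the node it was fetched from. A minimal sketch of that rewrite, assuming the k3s default `https://127.0.0.1:6443` and a hypothetical target address (`192.168.16.254:6443` is an illustration, not taken from this job):

```shell
# Sketch only: point a k3s-generated kubeconfig at a reachable API address.
# NEW_SERVER is a hypothetical value; the playbook's actual target may differ.
KUBECONFIG_FILE="${KUBECONFIG_FILE:-./kubeconfig.sample}"
NEW_SERVER="${NEW_SERVER:-https://192.168.16.254:6443}"

# k3s writes the loopback address into its generated kubeconfig by default.
cat > "$KUBECONFIG_FILE" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Replace the server line in place ('|' as delimiter avoids escaping '/').
sed -i "s|server: https://127.0.0.1:6443|server: ${NEW_SERVER}|" "$KUBECONFIG_FILE"
grep 'server:' "$KUBECONFIG_FILE"
```

In the playbook run above the same effect is achieved with Ansible tasks rather than a shell one-liner, once per copy of the kubeconfig (on the manager and inside the manager service).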
2026-03-24 02:19:20.483931 | orchestrator | 2026-03-24 02:19:20.484062 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-24 02:19:20.484092 | orchestrator | 2026-03-24 02:19:20.484114 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-24 02:19:20.484134 | orchestrator | Tuesday 24 March 2026 02:18:40 +0000 (0:00:00.218) 0:00:00.218 ********* 2026-03-24 02:19:20.484147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-24 02:19:20.484160 | orchestrator | 2026-03-24 02:19:20.484171 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-24 02:19:20.484183 | orchestrator | Tuesday 24 March 2026 02:18:41 +0000 (0:00:00.226) 0:00:00.444 ********* 2026-03-24 02:19:20.484251 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-24 02:19:20.484265 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-24 02:19:20.484276 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-24 02:19:20.484287 | orchestrator | 2026-03-24 02:19:20.484298 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-24 02:19:20.484309 | orchestrator | Tuesday 24 March 2026 02:18:42 +0000 (0:00:01.057) 0:00:01.502 ********* 2026-03-24 02:19:20.484320 | orchestrator | changed: [testbed-manager] 2026-03-24 02:19:20.484331 | orchestrator | 2026-03-24 02:19:20.484342 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-24 02:19:20.484353 | orchestrator | Tuesday 24 March 2026 02:18:43 +0000 (0:00:01.224) 0:00:02.726 ********* 2026-03-24 02:19:20.484364 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage 
openstackclient service (10 retries left). 2026-03-24 02:19:20.484375 | orchestrator | ok: [testbed-manager] 2026-03-24 02:19:20.484388 | orchestrator | 2026-03-24 02:19:20.484399 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-24 02:19:20.484409 | orchestrator | Tuesday 24 March 2026 02:19:15 +0000 (0:00:32.320) 0:00:35.046 ********* 2026-03-24 02:19:20.484420 | orchestrator | changed: [testbed-manager] 2026-03-24 02:19:20.484431 | orchestrator | 2026-03-24 02:19:20.484442 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-24 02:19:20.484452 | orchestrator | Tuesday 24 March 2026 02:19:16 +0000 (0:00:00.879) 0:00:35.926 ********* 2026-03-24 02:19:20.484486 | orchestrator | ok: [testbed-manager] 2026-03-24 02:19:20.484499 | orchestrator | 2026-03-24 02:19:20.484512 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-24 02:19:20.484532 | orchestrator | Tuesday 24 March 2026 02:19:17 +0000 (0:00:00.603) 0:00:36.529 ********* 2026-03-24 02:19:20.484551 | orchestrator | changed: [testbed-manager] 2026-03-24 02:19:20.484571 | orchestrator | 2026-03-24 02:19:20.484590 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-24 02:19:20.484611 | orchestrator | Tuesday 24 March 2026 02:19:18 +0000 (0:00:01.289) 0:00:37.819 ********* 2026-03-24 02:19:20.484630 | orchestrator | changed: [testbed-manager] 2026-03-24 02:19:20.484673 | orchestrator | 2026-03-24 02:19:20.484696 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-24 02:19:20.484714 | orchestrator | Tuesday 24 March 2026 02:19:19 +0000 (0:00:00.638) 0:00:38.458 ********* 2026-03-24 02:19:20.484734 | orchestrator | changed: [testbed-manager] 2026-03-24 02:19:20.484753 | orchestrator | 2026-03-24 02:19:20.484773 | orchestrator | 
RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-24 02:19:20.484792 | orchestrator | Tuesday 24 March 2026 02:19:19 +0000 (0:00:00.592) 0:00:39.050 ********* 2026-03-24 02:19:20.484812 | orchestrator | ok: [testbed-manager] 2026-03-24 02:19:20.484830 | orchestrator | 2026-03-24 02:19:20.484850 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:19:20.484868 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:19:20.484889 | orchestrator | 2026-03-24 02:19:20.484908 | orchestrator | 2026-03-24 02:19:20.484925 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:19:20.484941 | orchestrator | Tuesday 24 March 2026 02:19:20 +0000 (0:00:00.387) 0:00:39.438 ********* 2026-03-24 02:19:20.484951 | orchestrator | =============================================================================== 2026-03-24 02:19:20.484962 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.32s 2026-03-24 02:19:20.484973 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.29s 2026-03-24 02:19:20.484984 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.22s 2026-03-24 02:19:20.485006 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.06s 2026-03-24 02:19:20.485017 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.88s 2026-03-24 02:19:20.485027 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.64s 2026-03-24 02:19:20.485038 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.60s 2026-03-24 02:19:20.485049 | orchestrator | osism.services.openstackclient : Wait for an healthy service 
------------ 0.59s 2026-03-24 02:19:20.485059 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.39s 2026-03-24 02:19:20.485070 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.23s 2026-03-24 02:19:22.662858 | orchestrator | 2026-03-24 02:19:22 | INFO  | Task 2ec14fe4-0cda-4959-be9e-40e6c61d1074 (common) was prepared for execution. 2026-03-24 02:19:22.662985 | orchestrator | 2026-03-24 02:19:22 | INFO  | It takes a moment until task 2ec14fe4-0cda-4959-be9e-40e6c61d1074 (common) has been started and output is visible here. 2026-03-24 02:19:34.302318 | orchestrator | 2026-03-24 02:19:34.302405 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-24 02:19:34.302414 | orchestrator | 2026-03-24 02:19:34.302421 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-24 02:19:34.302427 | orchestrator | Tuesday 24 March 2026 02:19:26 +0000 (0:00:00.261) 0:00:00.261 ********* 2026-03-24 02:19:34.302434 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:19:34.302441 | orchestrator | 2026-03-24 02:19:34.302447 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-24 02:19:34.302452 | orchestrator | Tuesday 24 March 2026 02:19:27 +0000 (0:00:01.224) 0:00:01.485 ********* 2026-03-24 02:19:34.302458 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 02:19:34.302463 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 02:19:34.302470 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 02:19:34.302475 | orchestrator | changed: [testbed-node-0] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 02:19:34.302481 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 02:19:34.302486 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 02:19:34.302492 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 02:19:34.302497 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 02:19:34.302503 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 02:19:34.302508 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 02:19:34.302531 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 02:19:34.302544 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 02:19:34.302557 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 02:19:34.302566 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 02:19:34.302574 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 02:19:34.302582 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 02:19:34.302591 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 02:19:34.302600 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 02:19:34.302626 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 02:19:34.302635 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 
'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 02:19:34.302643 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 02:19:34.302651 | orchestrator | 2026-03-24 02:19:34.302660 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-24 02:19:34.302668 | orchestrator | Tuesday 24 March 2026 02:19:30 +0000 (0:00:02.526) 0:00:04.011 ********* 2026-03-24 02:19:34.302677 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:19:34.302759 | orchestrator | 2026-03-24 02:19:34.302768 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-24 02:19:34.302773 | orchestrator | Tuesday 24 March 2026 02:19:31 +0000 (0:00:01.254) 0:00:05.266 ********* 2026-03-24 02:19:34.302787 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:34.302796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:34.302820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:34.302827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:34.302833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:34.302838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:34.302850 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:34.302856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:34.302862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:34.302877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519588 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519729 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:35.519761 | orchestrator | 2026-03-24 02:19:35.519772 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-24 02:19:35.519782 | orchestrator | Tuesday 24 March 2026 02:19:35 +0000 (0:00:03.576) 0:00:08.843 ********* 2026-03-24 02:19:35.519794 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:35.519804 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:35.519813 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:35.519822 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:19:35.519831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:35.519853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.060495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.060626 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:19:36.060747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.060769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-24 02:19:36.060782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.060794 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:19:36.060806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.060830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.060843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.060854 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:19:36.060886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.060909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.060921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.060933 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:19:36.060944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.060956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.060967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.060978 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:19:36.060990 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.061013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860309 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:19:36.860325 | orchestrator | 2026-03-24 02:19:36.860336 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-24 02:19:36.860347 | orchestrator | Tuesday 24 March 2026 02:19:36 +0000 (0:00:00.846) 0:00:09.690 
********* 2026-03-24 02:19:36.860357 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.860370 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860378 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860387 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:19:36.860414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.860429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860469 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:19:36.860502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.860512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860530 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:19:36.860538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.860548 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:36.860572 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:19:36.860581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:36.860614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:41.568515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:41.568602 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:19:41.568615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:41.568625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:41.568634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:41.568642 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:19:41.568650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 02:19:41.568658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-03-24 02:19:41.568684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:41.568692 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:19:41.568700 | orchestrator | 2026-03-24 02:19:41.568766 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-24 02:19:41.568777 | orchestrator | Tuesday 24 March 2026 02:19:37 +0000 (0:00:01.638) 0:00:11.328 ********* 2026-03-24 02:19:41.568785 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:19:41.568792 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:19:41.568800 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:19:41.568807 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:19:41.568827 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:19:41.568835 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:19:41.568854 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:19:41.568862 | orchestrator | 2026-03-24 02:19:41.568869 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-24 02:19:41.568876 | orchestrator | Tuesday 24 March 2026 02:19:38 +0000 (0:00:00.677) 0:00:12.005 ********* 2026-03-24 02:19:41.568884 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:19:41.568891 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:19:41.568898 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:19:41.568905 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:19:41.568912 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:19:41.568920 | 
orchestrator | skipping: [testbed-node-4] 2026-03-24 02:19:41.568927 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:19:41.568934 | orchestrator | 2026-03-24 02:19:41.568942 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-24 02:19:41.568949 | orchestrator | Tuesday 24 March 2026 02:19:39 +0000 (0:00:00.768) 0:00:12.774 ********* 2026-03-24 02:19:41.568957 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:41.568979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:41.568987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:41.569002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:41.569013 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:41.569021 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:41.569040 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:44.214430 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214629 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214659 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 
02:19:44.214705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:44.214854 | orchestrator | 2026-03-24 02:19:44.214867 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-24 02:19:44.214879 | orchestrator | Tuesday 24 March 2026 02:19:42 +0000 (0:00:03.320) 0:00:16.095 ********* 2026-03-24 02:19:44.214891 | orchestrator | [WARNING]: Skipped 2026-03-24 02:19:44.214903 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-24 02:19:44.214914 | orchestrator | to this access issue: 2026-03-24 02:19:44.214927 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-24 02:19:44.214938 | orchestrator | directory 2026-03-24 02:19:44.214949 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 02:19:44.214961 | orchestrator | 2026-03-24 02:19:44.214974 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-24 02:19:44.214987 | orchestrator | Tuesday 24 March 2026 02:19:43 +0000 (0:00:00.876) 0:00:16.972 ********* 2026-03-24 02:19:44.214999 | orchestrator | [WARNING]: Skipped 2026-03-24 02:19:44.215019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-24 02:19:53.348183 | orchestrator | to this access issue: 2026-03-24 02:19:53.348266 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-24 02:19:53.348274 | orchestrator | directory 2026-03-24 02:19:53.348279 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 02:19:53.348284 | orchestrator | 2026-03-24 02:19:53.348289 | 
orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-24 02:19:53.348294 | orchestrator | Tuesday 24 March 2026 02:19:44 +0000 (0:00:01.108) 0:00:18.080 ********* 2026-03-24 02:19:53.348299 | orchestrator | [WARNING]: Skipped 2026-03-24 02:19:53.348325 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-24 02:19:53.348331 | orchestrator | to this access issue: 2026-03-24 02:19:53.348340 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-24 02:19:53.348348 | orchestrator | directory 2026-03-24 02:19:53.348354 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 02:19:53.348360 | orchestrator | 2026-03-24 02:19:53.348366 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-24 02:19:53.348372 | orchestrator | Tuesday 24 March 2026 02:19:45 +0000 (0:00:00.799) 0:00:18.880 ********* 2026-03-24 02:19:53.348378 | orchestrator | [WARNING]: Skipped 2026-03-24 02:19:53.348384 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-24 02:19:53.348390 | orchestrator | to this access issue: 2026-03-24 02:19:53.348396 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-24 02:19:53.348402 | orchestrator | directory 2026-03-24 02:19:53.348408 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 02:19:53.348414 | orchestrator | 2026-03-24 02:19:53.348420 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-24 02:19:53.348426 | orchestrator | Tuesday 24 March 2026 02:19:46 +0000 (0:00:00.792) 0:00:19.672 ********* 2026-03-24 02:19:53.348432 | orchestrator | changed: [testbed-manager] 2026-03-24 02:19:53.348438 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:19:53.348443 | orchestrator | changed: 
[testbed-node-1] 2026-03-24 02:19:53.348449 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:19:53.348454 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:19:53.348475 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:19:53.348481 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:19:53.348487 | orchestrator | 2026-03-24 02:19:53.348494 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-24 02:19:53.348499 | orchestrator | Tuesday 24 March 2026 02:19:48 +0000 (0:00:02.423) 0:00:22.096 ********* 2026-03-24 02:19:53.348503 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 02:19:53.348508 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 02:19:53.348512 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 02:19:53.348516 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 02:19:53.348520 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 02:19:53.348524 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 02:19:53.348527 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 02:19:53.348531 | orchestrator | 2026-03-24 02:19:53.348537 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-24 02:19:53.348542 | orchestrator | Tuesday 24 March 2026 02:19:50 +0000 (0:00:02.029) 0:00:24.125 ********* 2026-03-24 02:19:53.348545 | orchestrator | changed: [testbed-manager] 2026-03-24 02:19:53.348549 | orchestrator | changed: 
[testbed-node-0] 2026-03-24 02:19:53.348553 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:19:53.348556 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:19:53.348560 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:19:53.348564 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:19:53.348567 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:19:53.348571 | orchestrator | 2026-03-24 02:19:53.348575 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-24 02:19:53.348579 | orchestrator | Tuesday 24 March 2026 02:19:52 +0000 (0:00:01.811) 0:00:25.937 ********* 2026-03-24 02:19:53.348584 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:53.348609 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:53.348624 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:53.348628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:53.348638 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:53.348643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:53.348659 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:53.348667 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:53.348680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:53.348693 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.280229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:59.280326 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.280337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:59.280362 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.280389 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.280396 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-24 02:19:59.280403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:19:59.280432 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.280439 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.280445 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.280451 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.280458 | orchestrator | 2026-03-24 02:19:59.280466 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-24 02:19:59.280474 | orchestrator | Tuesday 24 March 2026 02:19:53 +0000 (0:00:01.490) 0:00:27.427 ********* 2026-03-24 02:19:59.280481 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 02:19:59.280496 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 02:19:59.280501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 02:19:59.280512 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 02:19:59.280516 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 02:19:59.280520 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 02:19:59.280524 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 02:19:59.280528 | orchestrator | 2026-03-24 02:19:59.280532 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-24 02:19:59.280536 | orchestrator | Tuesday 24 March 
2026 02:19:55 +0000 (0:00:01.876) 0:00:29.304 ********* 2026-03-24 02:19:59.280540 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 02:19:59.280545 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 02:19:59.280554 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 02:19:59.280558 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 02:19:59.280562 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 02:19:59.280565 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 02:19:59.280569 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 02:19:59.280573 | orchestrator | 2026-03-24 02:19:59.280577 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-24 02:19:59.280581 | orchestrator | Tuesday 24 March 2026 02:19:57 +0000 (0:00:01.655) 0:00:30.959 ********* 2026-03-24 02:19:59.280584 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.280596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.869714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.869847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.869875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.869892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.869898 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.869904 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 02:19:59.869910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.869929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.869935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.869944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.869953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.869960 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.869967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.869973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:19:59.869985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:21:15.037418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:21:15.037566 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:21:15.037583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:21:15.037610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:21:15.037623 | orchestrator | 2026-03-24 02:21:15.037636 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-24 02:21:15.037648 | orchestrator | Tuesday 24 March 2026 02:19:59 +0000 (0:00:02.540) 0:00:33.499 ********* 2026-03-24 02:21:15.037659 | orchestrator | changed: [testbed-manager] 2026-03-24 02:21:15.037671 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:21:15.037681 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:21:15.037699 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:21:15.037718 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:21:15.037737 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:21:15.037755 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:21:15.037774 | orchestrator | 
2026-03-24 02:21:15.037792 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-24 02:21:15.037811 | orchestrator | Tuesday 24 March 2026 02:20:01 +0000 (0:00:01.342) 0:00:34.842 ********* 2026-03-24 02:21:15.037830 | orchestrator | changed: [testbed-manager] 2026-03-24 02:21:15.037849 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:21:15.037868 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:21:15.037880 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:21:15.037891 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:21:15.037901 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:21:15.037912 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:21:15.037923 | orchestrator | 2026-03-24 02:21:15.037934 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 02:21:15.038007 | orchestrator | Tuesday 24 March 2026 02:20:02 +0000 (0:00:01.040) 0:00:35.883 ********* 2026-03-24 02:21:15.038083 | orchestrator | 2026-03-24 02:21:15.038095 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 02:21:15.038106 | orchestrator | Tuesday 24 March 2026 02:20:02 +0000 (0:00:00.061) 0:00:35.945 ********* 2026-03-24 02:21:15.038140 | orchestrator | 2026-03-24 02:21:15.038151 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 02:21:15.038162 | orchestrator | Tuesday 24 March 2026 02:20:02 +0000 (0:00:00.061) 0:00:36.007 ********* 2026-03-24 02:21:15.038173 | orchestrator | 2026-03-24 02:21:15.038184 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 02:21:15.038194 | orchestrator | Tuesday 24 March 2026 02:20:02 +0000 (0:00:00.060) 0:00:36.068 ********* 2026-03-24 02:21:15.038205 | orchestrator | 2026-03-24 02:21:15.038216 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-03-24 02:21:15.038227 | orchestrator | Tuesday 24 March 2026 02:20:02 +0000 (0:00:00.202) 0:00:36.270 ********* 2026-03-24 02:21:15.038249 | orchestrator | 2026-03-24 02:21:15.038260 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 02:21:15.038271 | orchestrator | Tuesday 24 March 2026 02:20:02 +0000 (0:00:00.063) 0:00:36.334 ********* 2026-03-24 02:21:15.038282 | orchestrator | 2026-03-24 02:21:15.038292 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 02:21:15.038304 | orchestrator | Tuesday 24 March 2026 02:20:02 +0000 (0:00:00.058) 0:00:36.393 ********* 2026-03-24 02:21:15.038314 | orchestrator | 2026-03-24 02:21:15.038325 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-24 02:21:15.038336 | orchestrator | Tuesday 24 March 2026 02:20:02 +0000 (0:00:00.090) 0:00:36.483 ********* 2026-03-24 02:21:15.038347 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:21:15.038358 | orchestrator | changed: [testbed-manager] 2026-03-24 02:21:15.038369 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:21:15.038379 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:21:15.038390 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:21:15.038421 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:21:15.038433 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:21:15.038444 | orchestrator | 2026-03-24 02:21:15.038455 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-24 02:21:15.038466 | orchestrator | Tuesday 24 March 2026 02:20:32 +0000 (0:00:29.757) 0:01:06.240 ********* 2026-03-24 02:21:15.038477 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:21:15.038488 | orchestrator | changed: [testbed-manager] 2026-03-24 02:21:15.038499 | orchestrator | changed: 
[testbed-node-1] 2026-03-24 02:21:15.038517 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:21:15.038535 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:21:15.038553 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:21:15.038571 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:21:15.038588 | orchestrator | 2026-03-24 02:21:15.038606 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-24 02:21:15.038624 | orchestrator | Tuesday 24 March 2026 02:21:04 +0000 (0:00:32.090) 0:01:38.331 ********* 2026-03-24 02:21:15.038642 | orchestrator | ok: [testbed-manager] 2026-03-24 02:21:15.038661 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:21:15.038680 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:21:15.038699 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:21:15.038716 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:21:15.038735 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:21:15.038749 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:21:15.038760 | orchestrator | 2026-03-24 02:21:15.038770 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-24 02:21:15.038802 | orchestrator | Tuesday 24 March 2026 02:21:06 +0000 (0:00:01.824) 0:01:40.156 ********* 2026-03-24 02:21:15.038813 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:21:15.038824 | orchestrator | changed: [testbed-manager] 2026-03-24 02:21:15.038835 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:21:15.038845 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:21:15.038856 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:21:15.038867 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:21:15.038877 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:21:15.038888 | orchestrator | 2026-03-24 02:21:15.038899 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 
02:21:15.038911 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 02:21:15.038937 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 02:21:15.038969 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 02:21:15.038981 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 02:21:15.039001 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 02:21:15.039012 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 02:21:15.039023 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 02:21:15.039034 | orchestrator | 2026-03-24 02:21:15.039045 | orchestrator | 2026-03-24 02:21:15.039056 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:21:15.039067 | orchestrator | Tuesday 24 March 2026 02:21:14 +0000 (0:00:08.477) 0:01:48.634 ********* 2026-03-24 02:21:15.039078 | orchestrator | =============================================================================== 2026-03-24 02:21:15.039089 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.09s 2026-03-24 02:21:15.039100 | orchestrator | common : Restart fluentd container ------------------------------------- 29.76s 2026-03-24 02:21:15.039111 | orchestrator | common : Restart cron container ----------------------------------------- 8.48s 2026-03-24 02:21:15.039121 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.58s 2026-03-24 02:21:15.039132 | orchestrator | common : Copying over config.json files for services -------------------- 
3.32s 2026-03-24 02:21:15.039143 | orchestrator | common : Check common containers ---------------------------------------- 2.54s 2026-03-24 02:21:15.039153 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.53s 2026-03-24 02:21:15.039164 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.42s 2026-03-24 02:21:15.039175 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.03s 2026-03-24 02:21:15.039186 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.88s 2026-03-24 02:21:15.039197 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.83s 2026-03-24 02:21:15.039208 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.81s 2026-03-24 02:21:15.039218 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.66s 2026-03-24 02:21:15.039229 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.64s 2026-03-24 02:21:15.039240 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.49s 2026-03-24 02:21:15.039251 | orchestrator | common : Creating log volume -------------------------------------------- 1.34s 2026-03-24 02:21:15.039273 | orchestrator | common : include_tasks -------------------------------------------------- 1.25s 2026-03-24 02:21:15.392054 | orchestrator | common : include_tasks -------------------------------------------------- 1.22s 2026-03-24 02:21:15.392186 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.11s 2026-03-24 02:21:15.392227 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.04s 2026-03-24 02:21:17.670285 | orchestrator | 2026-03-24 02:21:17 | INFO  | Task 9b6cc136-5676-404a-8d7a-10bf057aea4a (loadbalancer) 
was prepared for execution. 2026-03-24 02:21:17.670379 | orchestrator | 2026-03-24 02:21:17 | INFO  | It takes a moment until task 9b6cc136-5676-404a-8d7a-10bf057aea4a (loadbalancer) has been started and output is visible here. 2026-03-24 02:21:33.126146 | orchestrator | 2026-03-24 02:21:33.126254 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 02:21:33.126270 | orchestrator | 2026-03-24 02:21:33.126281 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 02:21:33.126291 | orchestrator | Tuesday 24 March 2026 02:21:21 +0000 (0:00:00.250) 0:00:00.250 ********* 2026-03-24 02:21:33.126301 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:21:33.126338 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:21:33.126349 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:21:33.126358 | orchestrator | 2026-03-24 02:21:33.126382 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 02:21:33.126392 | orchestrator | Tuesday 24 March 2026 02:21:22 +0000 (0:00:00.264) 0:00:00.515 ********* 2026-03-24 02:21:33.126402 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-24 02:21:33.126412 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-24 02:21:33.126421 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-24 02:21:33.126431 | orchestrator | 2026-03-24 02:21:33.126441 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-24 02:21:33.126450 | orchestrator | 2026-03-24 02:21:33.126460 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-24 02:21:33.126469 | orchestrator | Tuesday 24 March 2026 02:21:22 +0000 (0:00:00.413) 0:00:00.928 ********* 2026-03-24 02:21:33.126496 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:21:33.126513 | orchestrator | 2026-03-24 02:21:33.126529 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-24 02:21:33.126550 | orchestrator | Tuesday 24 March 2026 02:21:22 +0000 (0:00:00.496) 0:00:01.425 ********* 2026-03-24 02:21:33.126571 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:21:33.126586 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:21:33.126602 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:21:33.126618 | orchestrator | 2026-03-24 02:21:33.126634 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-24 02:21:33.126650 | orchestrator | Tuesday 24 March 2026 02:21:23 +0000 (0:00:00.585) 0:00:02.010 ********* 2026-03-24 02:21:33.126667 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:21:33.126683 | orchestrator | 2026-03-24 02:21:33.126699 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-24 02:21:33.126713 | orchestrator | Tuesday 24 March 2026 02:21:24 +0000 (0:00:00.595) 0:00:02.605 ********* 2026-03-24 02:21:33.126730 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:21:33.126745 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:21:33.126761 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:21:33.126778 | orchestrator | 2026-03-24 02:21:33.126789 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-24 02:21:33.126799 | orchestrator | Tuesday 24 March 2026 02:21:24 +0000 (0:00:00.604) 0:00:03.210 ********* 2026-03-24 02:21:33.126808 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-24 02:21:33.126818 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 
1}) 2026-03-24 02:21:33.126827 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-24 02:21:33.126836 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-24 02:21:33.126846 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-24 02:21:33.126856 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-24 02:21:33.126866 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-24 02:21:33.126875 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-24 02:21:33.126885 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-24 02:21:33.126894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-24 02:21:33.126903 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-24 02:21:33.126913 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-24 02:21:33.126933 | orchestrator | 2026-03-24 02:21:33.126943 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-24 02:21:33.126952 | orchestrator | Tuesday 24 March 2026 02:21:28 +0000 (0:00:04.146) 0:00:07.356 ********* 2026-03-24 02:21:33.126962 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-24 02:21:33.126972 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-24 02:21:33.127023 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-24 02:21:33.127034 | orchestrator | 2026-03-24 02:21:33.127043 | orchestrator | TASK [module-load : Persist modules via 
modules-load.d] ************************ 2026-03-24 02:21:33.127053 | orchestrator | Tuesday 24 March 2026 02:21:29 +0000 (0:00:00.727) 0:00:08.084 ********* 2026-03-24 02:21:33.127063 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-24 02:21:33.127072 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-24 02:21:33.127082 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-24 02:21:33.127091 | orchestrator | 2026-03-24 02:21:33.127101 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-24 02:21:33.127110 | orchestrator | Tuesday 24 March 2026 02:21:30 +0000 (0:00:01.211) 0:00:09.295 ********* 2026-03-24 02:21:33.127120 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-24 02:21:33.127130 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:21:33.127158 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-24 02:21:33.127168 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:21:33.127178 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-24 02:21:33.127187 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:21:33.127196 | orchestrator | 2026-03-24 02:21:33.127206 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-24 02:21:33.127215 | orchestrator | Tuesday 24 March 2026 02:21:31 +0000 (0:00:00.475) 0:00:09.771 ********* 2026-03-24 02:21:33.127228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 02:21:33.127252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 02:21:33.127263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 02:21:33.127280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:21:33.127291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:21:33.127308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:21:38.068431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:21:38.068526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:21:38.068536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:21:38.068543 | orchestrator | 2026-03-24 02:21:38.068551 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-24 02:21:38.068559 | orchestrator | Tuesday 24 March 2026 02:21:33 +0000 (0:00:01.797) 0:00:11.569 ********* 2026-03-24 02:21:38.068566 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:21:38.068573 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:21:38.068579 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:21:38.068604 | orchestrator | 2026-03-24 02:21:38.068611 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-24 02:21:38.068617 | orchestrator | Tuesday 24 March 2026 02:21:33 +0000 
(0:00:00.855) 0:00:12.424 ********* 2026-03-24 02:21:38.068624 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-24 02:21:38.068631 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-24 02:21:38.068637 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-24 02:21:38.068643 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-24 02:21:38.068649 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-24 02:21:38.068655 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-24 02:21:38.068662 | orchestrator | 2026-03-24 02:21:38.068668 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-24 02:21:38.068674 | orchestrator | Tuesday 24 March 2026 02:21:35 +0000 (0:00:01.423) 0:00:13.847 ********* 2026-03-24 02:21:38.068680 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:21:38.068687 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:21:38.068693 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:21:38.068699 | orchestrator | 2026-03-24 02:21:38.068706 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-24 02:21:38.068712 | orchestrator | Tuesday 24 March 2026 02:21:36 +0000 (0:00:00.868) 0:00:14.715 ********* 2026-03-24 02:21:38.068718 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:21:38.068725 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:21:38.068731 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:21:38.068737 | orchestrator | 2026-03-24 02:21:38.068744 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-24 02:21:38.068750 | orchestrator | Tuesday 24 March 2026 02:21:37 +0000 (0:00:01.230) 0:00:15.946 ********* 2026-03-24 02:21:38.068757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 02:21:38.068778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:21:38.068786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:21:38.068794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 02:21:38.068806 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:21:38.068817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 02:21:38.068865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 
02:21:38.068878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:21:38.068888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 02:21:38.068898 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:21:38.068916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 02:21:40.742390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:21:40.742498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:21:40.742509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 2985'], 'timeout': '30'}}})  2026-03-24 02:21:40.742518 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:21:40.742527 | orchestrator | 2026-03-24 02:21:40.742536 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-24 02:21:40.742545 | orchestrator | Tuesday 24 March 2026 02:21:38 +0000 (0:00:00.566) 0:00:16.513 ********* 2026-03-24 02:21:40.742553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 02:21:40.742561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 02:21:40.742569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 02:21:40.742609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:21:40.742619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:21:40.742627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 02:21:40.742635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:21:40.742642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:21:40.742650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 02:21:40.742671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:21:48.955721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:21:48.955871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120', '__omit_place_holder__fdf57fd59f05b8ae41f1d8e255048ff21bb76120'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 02:21:48.955900 | orchestrator | 2026-03-24 02:21:48.955922 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-24 02:21:48.955943 | orchestrator | Tuesday 24 March 2026 02:21:40 +0000 (0:00:02.668) 0:00:19.182 ********* 2026-03-24 02:21:48.955963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 02:21:48.955984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 02:21:48.956005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 02:21:48.956127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:21:48.956190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:21:48.956213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:21:48.956234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:21:48.956254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2026-03-24 02:21:48.956274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:21:48.956292 | orchestrator | 2026-03-24 02:21:48.956312 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-24 02:21:48.956331 | orchestrator | Tuesday 24 March 2026 02:21:43 +0000 (0:00:03.193) 0:00:22.375 ********* 2026-03-24 02:21:48.956351 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-24 02:21:48.956388 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-24 02:21:48.956406 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-24 02:21:48.956426 | orchestrator | 2026-03-24 02:21:48.956445 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-24 02:21:48.956465 | orchestrator | Tuesday 24 March 2026 02:21:45 +0000 (0:00:01.803) 0:00:24.179 ********* 2026-03-24 02:21:48.956485 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-24 02:21:48.956503 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-24 02:21:48.956522 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-24 02:21:48.956541 | orchestrator | 2026-03-24 02:21:48.956562 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-24 02:21:48.956582 | orchestrator | Tuesday 24 March 2026 02:21:48 +0000 (0:00:02.705) 0:00:26.884 ********* 2026-03-24 02:21:48.956601 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:21:48.956617 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:21:48.956627 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:21:48.956637 | orchestrator | 2026-03-24 02:21:48.956659 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-24 02:22:00.054808 | orchestrator | Tuesday 24 March 2026 02:21:48 +0000 (0:00:00.516) 0:00:27.401 ********* 2026-03-24 02:22:00.054956 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-24 02:22:00.054999 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-24 02:22:00.055012 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-24 02:22:00.055024 | orchestrator | 2026-03-24 02:22:00.055036 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-24 02:22:00.055113 | orchestrator | Tuesday 24 March 2026 02:21:50 +0000 (0:00:01.944) 0:00:29.346 ********* 2026-03-24 02:22:00.055126 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-24 02:22:00.055137 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-24 02:22:00.055148 | orchestrator | changed: [testbed-node-2] 
=> (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-24 02:22:00.055160 | orchestrator | 2026-03-24 02:22:00.055171 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-24 02:22:00.055182 | orchestrator | Tuesday 24 March 2026 02:21:52 +0000 (0:00:01.972) 0:00:31.319 ********* 2026-03-24 02:22:00.055193 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-24 02:22:00.055205 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-24 02:22:00.055216 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-24 02:22:00.055227 | orchestrator | 2026-03-24 02:22:00.055252 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-24 02:22:00.055264 | orchestrator | Tuesday 24 March 2026 02:21:54 +0000 (0:00:01.458) 0:00:32.778 ********* 2026-03-24 02:22:00.055275 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-24 02:22:00.055287 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-24 02:22:00.055298 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-24 02:22:00.055309 | orchestrator | 2026-03-24 02:22:00.055320 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-24 02:22:00.055331 | orchestrator | Tuesday 24 March 2026 02:21:55 +0000 (0:00:01.415) 0:00:34.193 ********* 2026-03-24 02:22:00.055367 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:22:00.055382 | orchestrator | 2026-03-24 02:22:00.055395 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-24 02:22:00.055408 | orchestrator | Tuesday 24 March 2026 02:21:56 +0000 (0:00:00.466) 0:00:34.660 ********* 2026-03-24 02:22:00.055423 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 02:22:00.055441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 02:22:00.055455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': 
'30'}}}) 2026-03-24 02:22:00.055496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:22:00.055511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:22:00.055524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:22:00.055547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:22:00.055560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:22:00.055574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:22:00.055587 | orchestrator | 2026-03-24 02:22:00.055601 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-24 02:22:00.055612 | orchestrator | Tuesday 24 March 2026 02:21:59 +0000 
(0:00:03.356) 0:00:38.017 ********* 2026-03-24 02:22:00.055639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 02:22:00.722422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:00.722547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:00.722605 | 
orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:00.722630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 02:22:00.722650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:00.722669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:00.722688 | 
orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:00.722705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 02:22:00.722766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:00.722789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:00.722823 | 
orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:00.722840 | orchestrator | 2026-03-24 02:22:00.722859 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-24 02:22:00.722878 | orchestrator | Tuesday 24 March 2026 02:22:00 +0000 (0:00:00.485) 0:00:38.502 ********* 2026-03-24 02:22:00.722895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 02:22:00.722914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:00.722931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:00.722949 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:00.722966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 02:22:00.723006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:01.395930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:01.396143 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:01.396167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 02:22:01.396213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:01.396226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:01.396238 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:01.396249 | orchestrator | 2026-03-24 02:22:01.396262 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-24 02:22:01.396275 | orchestrator | Tuesday 24 March 2026 02:22:00 +0000 (0:00:00.665) 0:00:39.167 ********* 2026-03-24 02:22:01.396286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 02:22:01.396299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-24 02:22:01.396328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:01.396349 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:01.396361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 02:22:01.396373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-24 02:22:01.396384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:01.396395 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:01.396407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 02:22:01.396435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-24 02:22:01.396452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:01.396480 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:02.520846 | orchestrator | 2026-03-24 02:22:02.520971 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-24 02:22:02.520991 | orchestrator | Tuesday 24 March 2026 02:22:01 +0000 (0:00:00.668) 0:00:39.836 ********* 2026-03-24 02:22:02.521008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 02:22:02.521024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:02.521037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:02.521217 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:02.521232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 02:22:02.521243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:02.521279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:02.521309 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:02.521343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 02:22:02.521357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:02.521369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:02.521380 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:02.521391 | orchestrator | 2026-03-24 02:22:02.521403 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-24 02:22:02.521415 | orchestrator | Tuesday 24 March 2026 02:22:01 +0000 (0:00:00.477) 0:00:40.314 ********* 2026-03-24 02:22:02.521427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 02:22:02.521441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:02.521459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:02.521513 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:02.521556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 02:22:03.297906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:03.298149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:03.298177 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:03.298192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 02:22:03.298205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:03.298217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:03.298254 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:03.298266 | orchestrator | 2026-03-24 02:22:03.298279 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-24 02:22:03.298291 | orchestrator | Tuesday 24 March 2026 02:22:02 +0000 (0:00:00.654) 0:00:40.968 ********* 2026-03-24 02:22:03.298317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 02:22:03.298351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:03.298363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:03.298374 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:03.298385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 02:22:03.298397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:03.298409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:03.298429 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:03.298448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 02:22:03.298469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:04.519804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:04.519896 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:04.519909 | orchestrator | 2026-03-24 02:22:04.519920 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-24 02:22:04.519930 | orchestrator | Tuesday 24 March 2026 02:22:03 +0000 (0:00:00.770) 0:00:41.739 ********* 2026-03-24 02:22:04.519940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 02:22:04.519951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:04.519981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:04.519991 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:04.520001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 02:22:04.520027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:04.520132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:04.520155 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:04.520169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 02:22:04.520182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:04.520196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:04.520224 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:04.520241 | orchestrator | 2026-03-24 02:22:04.520256 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend 
internal TLS key] **** 2026-03-24 02:22:04.520271 | orchestrator | Tuesday 24 March 2026 02:22:03 +0000 (0:00:00.562) 0:00:42.302 ********* 2026-03-24 02:22:04.520284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 02:22:04.520294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:04.520321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:10.823780 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:10.823875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 02:22:10.823884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:10.823889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:10.823908 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:10.823912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 02:22:10.823916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 02:22:10.823930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 02:22:10.823935 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:10.823939 | orchestrator | 2026-03-24 02:22:10.823944 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-24 02:22:10.823949 | orchestrator | Tuesday 24 March 2026 02:22:04 +0000 (0:00:00.663) 0:00:42.966 ********* 2026-03-24 02:22:10.823953 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-24 02:22:10.823967 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-24 02:22:10.823971 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-24 02:22:10.823975 | orchestrator | 2026-03-24 02:22:10.823979 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-24 02:22:10.823985 | orchestrator | Tuesday 24 March 2026 02:22:05 +0000 (0:00:01.473) 0:00:44.440 ********* 2026-03-24 02:22:10.823992 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-24 02:22:10.823999 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-24 02:22:10.824004 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-24 02:22:10.824010 | orchestrator | 2026-03-24 02:22:10.824016 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-24 02:22:10.824021 | orchestrator | Tuesday 24 March 2026 02:22:07 +0000 (0:00:01.621) 0:00:46.061 ********* 2026-03-24 02:22:10.824032 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-24 02:22:10.824038 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-24 02:22:10.824045 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-24 02:22:10.824051 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-24 02:22:10.824058 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:10.824081 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-24 02:22:10.824087 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:10.824093 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-24 02:22:10.824099 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:10.824105 | orchestrator | 2026-03-24 02:22:10.824111 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-24 02:22:10.824117 | orchestrator | Tuesday 24 March 2026 02:22:08 +0000 (0:00:00.730) 0:00:46.792 ********* 2026-03-24 02:22:10.824123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 02:22:10.824130 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 02:22:10.824141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 02:22:10.824152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
2026-03-24 02:22:14.699694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:22:14.699791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 02:22:14.699800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:22:14.699807 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:22:14.699812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 02:22:14.699817 | orchestrator | 2026-03-24 02:22:14.699824 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-24 02:22:14.699841 | orchestrator | Tuesday 24 March 2026 02:22:10 +0000 (0:00:02.475) 0:00:49.268 ********* 2026-03-24 02:22:14.699846 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:22:14.699851 | orchestrator | 2026-03-24 02:22:14.699857 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-24 02:22:14.699862 | orchestrator | Tuesday 24 March 2026 02:22:11 +0000 (0:00:00.726) 0:00:49.994 ********* 2026-03-24 02:22:14.699879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-24 02:22:14.699890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 02:22:14.699896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 02:22:14.699901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 
'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 02:22:14.699906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-24 02:22:14.699914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 02:22:14.699919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 02:22:14.699933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 02:22:15.290958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-24 02:22:15.291058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 02:22:15.291108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 02:22:15.291139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 02:22:15.291153 | orchestrator | 2026-03-24 02:22:15.291167 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-24 02:22:15.291180 | orchestrator | Tuesday 24 March 2026 02:22:14 +0000 (0:00:03.147) 0:00:53.142 ********* 2026-03-24 02:22:15.291192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 02:22:15.291245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 02:22:15.291258 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 02:22:15.291270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 02:22:15.291282 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:15.291294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 02:22:15.291312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 02:22:15.291332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 02:22:15.291344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 02:22:15.291377 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:15.291399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 02:22:23.206806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 02:22:23.206910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 02:22:23.206925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 02:22:23.206961 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:23.206973 | orchestrator | 2026-03-24 02:22:23.206985 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-24 02:22:23.206996 | orchestrator | Tuesday 24 March 2026 02:22:15 +0000 (0:00:00.590) 0:00:53.733 ********* 2026-03-24 02:22:23.207007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-24 02:22:23.207019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-24 02:22:23.207031 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:23.207087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-24 02:22:23.207169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-24 02:22:23.207181 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:23.207190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-24 02:22:23.207201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-24 02:22:23.207210 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:23.207220 | orchestrator | 2026-03-24 02:22:23.207230 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-24 02:22:23.207240 | orchestrator | Tuesday 24 March 2026 02:22:16 +0000 (0:00:01.000) 0:00:54.733 ********* 2026-03-24 02:22:23.207249 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:22:23.207259 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:22:23.207268 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:22:23.207278 | orchestrator | 2026-03-24 02:22:23.207287 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-24 02:22:23.207297 | orchestrator | Tuesday 24 March 2026 02:22:17 +0000 (0:00:01.267) 0:00:56.000 ********* 2026-03-24 02:22:23.207307 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:22:23.207317 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:22:23.207326 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:22:23.207335 | orchestrator | 2026-03-24 02:22:23.207346 | orchestrator | 
TASK [include_role : barbican] ************************************************* 2026-03-24 02:22:23.207357 | orchestrator | Tuesday 24 March 2026 02:22:19 +0000 (0:00:01.878) 0:00:57.879 ********* 2026-03-24 02:22:23.207368 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:22:23.207379 | orchestrator | 2026-03-24 02:22:23.207407 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-24 02:22:23.207419 | orchestrator | Tuesday 24 March 2026 02:22:19 +0000 (0:00:00.564) 0:00:58.443 ********* 2026-03-24 02:22:23.207434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 02:22:23.207462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 02:22:23.207488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:22:23.207503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 02:22:23.207515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 02:22:23.207536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:22:23.755316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-24 02:22:23.755461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-24 02:22:23.755480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-24 02:22:23.755495 | orchestrator |
2026-03-24 02:22:23.755517 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-03-24 02:22:23.755537 | orchestrator | Tuesday 24 March 2026 02:22:23 +0000 (0:00:03.204) 0:01:01.647 *********
2026-03-24 02:22:23.755557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-24 02:22:23.755579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-24 02:22:23.755637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-24 02:22:23.755655 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:22:23.755674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-24 02:22:23.755686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-24 02:22:23.755699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-24 02:22:23.755710 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:22:23.755721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-24 02:22:23.755743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-24 02:22:32.624761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-24 02:22:32.624876 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:22:32.624893 | orchestrator |
2026-03-24 02:22:32.624904 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-03-24 02:22:32.624916 | orchestrator | Tuesday 24 March 2026 02:22:23 +0000 (0:00:00.549) 0:01:02.197 *********
2026-03-24 02:22:32.624943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-24 02:22:32.624956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-24 02:22:32.624967 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:22:32.624977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-24 02:22:32.624988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-24 02:22:32.624998 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:22:32.625008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-24 02:22:32.625018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-03-24 02:22:32.625027 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:22:32.625037 | orchestrator |
2026-03-24 02:22:32.625047 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-03-24 02:22:32.625057 | orchestrator | Tuesday 24 March 2026 02:22:24 +0000 (0:00:00.751) 0:01:02.949 *********
2026-03-24 02:22:32.625070 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:22:32.625086 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:22:32.625172 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:22:32.625194 | orchestrator |
2026-03-24 02:22:32.625224 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-03-24 02:22:32.625240 | orchestrator | Tuesday 24 March 2026 02:22:25 +0000 (0:00:01.462) 0:01:04.411 *********
2026-03-24 02:22:32.625256 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:22:32.625272 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:22:32.625314 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:22:32.625332 | orchestrator |
2026-03-24 02:22:32.625350 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-03-24 02:22:32.625362 | orchestrator | Tuesday 24 March 2026 02:22:27 +0000 (0:00:01.896) 0:01:06.307 *********
2026-03-24 02:22:32.625373 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:22:32.625384 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:22:32.625395 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:22:32.625407 | orchestrator |
2026-03-24 02:22:32.625416 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-03-24 02:22:32.625426 | orchestrator | Tuesday 24 March 2026 02:22:28 +0000 (0:00:00.297) 0:01:06.605 *********
2026-03-24 02:22:32.625442 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:22:32.625466 | orchestrator |
2026-03-24 02:22:32.625484 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-03-24 02:22:32.625499 | orchestrator | Tuesday 24 March 2026 02:22:28 +0000 (0:00:00.580) 0:01:07.186 *********
2026-03-24 02:22:32.625544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-24 02:22:32.625574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-24 02:22:32.625586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-24 02:22:32.625597 | orchestrator |
2026-03-24 02:22:32.625606 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-03-24 02:22:32.625617 | orchestrator | Tuesday 24 March 2026 02:22:31 +0000 (0:00:02.649) 0:01:09.836 *********
2026-03-24 02:22:32.625627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-24 02:22:32.625646 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:22:32.625657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-24 02:22:32.625667 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:22:32.625684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-03-24 02:22:39.626074 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:22:39.626180 | orchestrator |
2026-03-24 02:22:39.626191 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-03-24 02:22:39.626199 | orchestrator | Tuesday 24 March 2026 02:22:32 +0000 (0:00:01.231) 0:01:11.068 *********
2026-03-24 02:22:39.626220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-24 02:22:39.626229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-24 02:22:39.626237 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:22:39.626243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-24 02:22:39.626263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-24 02:22:39.626269 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:22:39.626275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-24 02:22:39.626281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-03-24 02:22:39.626286 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:22:39.626292 | orchestrator |
2026-03-24 02:22:39.626298 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-03-24 02:22:39.626303 | orchestrator | Tuesday 24 March 2026 02:22:34 +0000 (0:00:01.517) 0:01:12.585 *********
2026-03-24 02:22:39.626309 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:22:39.626314 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:22:39.626320 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:22:39.626325 | orchestrator |
2026-03-24 02:22:39.626331 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-03-24 02:22:39.626341 | orchestrator | Tuesday 24 March 2026 02:22:34 +0000 (0:00:00.402) 0:01:12.988 *********
2026-03-24 02:22:39.626351 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:22:39.626360 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:22:39.626369 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:22:39.626378 | orchestrator |
2026-03-24 02:22:39.626387 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-03-24 02:22:39.626397 | orchestrator | Tuesday 24 March 2026 02:22:35 +0000 (0:00:01.172) 0:01:14.160 *********
2026-03-24 02:22:39.626406 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:22:39.626417 | orchestrator |
2026-03-24 02:22:39.626426 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-03-24 02:22:39.626436 | orchestrator | Tuesday 24 March 2026 02:22:36 +0000 (0:00:00.844) 0:01:15.005 *********
2026-03-24 02:22:39.626472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-24 02:22:39.626489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:22:39.626497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 02:22:39.626504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 02:22:39.626511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-24 02:22:39.626521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:22:40.241100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 02:22:40.241240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 02:22:40.241254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-24 02:22:40.241264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:22:40.241272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 02:22:40.241297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 02:22:40.241312 | orchestrator |
2026-03-24 02:22:40.241327 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-24 02:22:40.241337 | orchestrator | Tuesday 24 March 2026 02:22:39 +0000 (0:00:03.141) 0:01:18.146 *********
2026-03-24 02:22:40.241346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-24 02:22:40.241355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:22:40.241364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 02:22:40.241372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 02:22:40.241381 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:22:40.241396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-24 02:22:46.218752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:22:46.218872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 02:22:46.218890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 02:22:46.218902 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:22:46.218915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-24 02:22:46.218926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:22:46.218984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 02:22:46.218997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 02:22:46.219007 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:46.219045 | orchestrator | 2026-03-24 02:22:46.219057 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-24 02:22:46.219069 | orchestrator | Tuesday 24 March 2026 02:22:40 +0000 (0:00:00.651) 0:01:18.798 ********* 2026-03-24 02:22:46.219079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-24 02:22:46.219090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-24 
02:22:46.219101 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:46.219111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-24 02:22:46.219120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-24 02:22:46.219130 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:46.219219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-24 02:22:46.219233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-24 02:22:46.219243 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:46.219253 | orchestrator | 2026-03-24 02:22:46.219265 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-24 02:22:46.219276 | orchestrator | Tuesday 24 March 2026 02:22:41 +0000 (0:00:01.162) 0:01:19.960 ********* 2026-03-24 02:22:46.219288 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:22:46.219299 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:22:46.219310 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:22:46.219331 | orchestrator | 2026-03-24 02:22:46.219342 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-24 02:22:46.219354 | orchestrator | Tuesday 24 March 2026 02:22:42 +0000 (0:00:01.305) 0:01:21.266 ********* 
2026-03-24 02:22:46.219365 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:22:46.219376 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:22:46.219387 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:22:46.219398 | orchestrator | 2026-03-24 02:22:46.219410 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-24 02:22:46.219421 | orchestrator | Tuesday 24 March 2026 02:22:44 +0000 (0:00:01.909) 0:01:23.175 ********* 2026-03-24 02:22:46.219432 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:46.219442 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:46.219454 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:46.219464 | orchestrator | 2026-03-24 02:22:46.219475 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-24 02:22:46.219486 | orchestrator | Tuesday 24 March 2026 02:22:45 +0000 (0:00:00.292) 0:01:23.468 ********* 2026-03-24 02:22:46.219497 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:46.219508 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:46.219519 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:46.219530 | orchestrator | 2026-03-24 02:22:46.219541 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-24 02:22:46.219552 | orchestrator | Tuesday 24 March 2026 02:22:45 +0000 (0:00:00.280) 0:01:23.748 ********* 2026-03-24 02:22:46.219563 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:22:46.219574 | orchestrator | 2026-03-24 02:22:46.219584 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-24 02:22:46.219593 | orchestrator | Tuesday 24 March 2026 02:22:46 +0000 (0:00:00.915) 0:01:24.663 ********* 2026-03-24 02:22:49.339565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 02:22:49.339693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 02:22:49.339711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 02:22:49.339748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 02:22:49.339761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 02:22:49.339772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:22:49.339810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-24 02:22:49.339823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 02:22:49.339835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 02:22:49.339854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 02:22:49.339865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 02:22:49.339876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 02:22:49.339900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.134308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 02:22:50.134413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.134447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 02:22:50.134456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.134465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.134486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.134508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.134524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.134533 | orchestrator | 2026-03-24 02:22:50.134543 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-24 02:22:50.134557 | orchestrator | Tuesday 24 March 2026 02:22:49 +0000 (0:00:03.320) 0:01:27.984 ********* 2026-03-24 02:22:50.134566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 02:22:50.134574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 02:22:50.134581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.134589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.134603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.522437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.522561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.522576 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:50.522590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 02:22:50.522602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 02:22:50.523111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 
02:22:50.523140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.523235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.523256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.523269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.523277 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:50.523286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 02:22:50.523295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 02:22:50.523303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 02:22:50.523319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 02:22:59.673818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 02:22:59.673901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:22:59.673910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-24 02:22:59.673915 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:59.673920 | orchestrator | 2026-03-24 02:22:59.673925 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-24 02:22:59.673930 | orchestrator | Tuesday 24 March 2026 02:22:50 +0000 (0:00:00.983) 0:01:28.967 ********* 2026-03-24 02:22:59.673934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}})  2026-03-24 02:22:59.673940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-24 02:22:59.673945 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:59.673949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-24 02:22:59.673953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-24 02:22:59.673957 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:59.673960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-24 02:22:59.673964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-24 02:22:59.673981 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:59.673985 | orchestrator | 2026-03-24 02:22:59.673989 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-24 02:22:59.673993 | orchestrator | Tuesday 24 March 2026 02:22:51 +0000 (0:00:01.144) 0:01:30.112 ********* 2026-03-24 02:22:59.673997 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:22:59.674001 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:22:59.674005 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:22:59.674008 | orchestrator | 2026-03-24 
02:22:59.674012 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-24 02:22:59.674044 | orchestrator | Tuesday 24 March 2026 02:22:52 +0000 (0:00:01.261) 0:01:31.373 ********* 2026-03-24 02:22:59.674048 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:22:59.674051 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:22:59.674055 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:22:59.674068 | orchestrator | 2026-03-24 02:22:59.674072 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-24 02:22:59.674076 | orchestrator | Tuesday 24 March 2026 02:22:54 +0000 (0:00:01.914) 0:01:33.287 ********* 2026-03-24 02:22:59.674094 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:22:59.674098 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:22:59.674102 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:22:59.674106 | orchestrator | 2026-03-24 02:22:59.674110 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-24 02:22:59.674154 | orchestrator | Tuesday 24 March 2026 02:22:55 +0000 (0:00:00.279) 0:01:33.567 ********* 2026-03-24 02:22:59.674158 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:22:59.674162 | orchestrator | 2026-03-24 02:22:59.674202 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-24 02:22:59.674207 | orchestrator | Tuesday 24 March 2026 02:22:56 +0000 (0:00:00.913) 0:01:34.481 ********* 2026-03-24 02:22:59.674217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 02:22:59.674223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 02:22:59.674240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 02:23:02.342484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 02:23:02.342640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 02:23:02.342680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 02:23:02.342701 | orchestrator | 2026-03-24 02:23:02.342713 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-24 02:23:02.342725 | orchestrator | Tuesday 24 March 2026 02:22:59 +0000 (0:00:03.741) 0:01:38.222 ********* 2026-03-24 02:23:02.342737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 02:23:02.342762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 02:23:05.614599 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:05.614696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 02:23:05.614727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 02:23:05.614755 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:05.614782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 02:23:05.614796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 02:23:05.614819 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:05.614827 | orchestrator | 2026-03-24 02:23:05.614837 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-24 02:23:05.614846 | orchestrator | Tuesday 24 March 2026 02:23:02 +0000 (0:00:02.671) 0:01:40.893 ********* 2026-03-24 02:23:05.614855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 02:23:05.614871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 02:23:13.584482 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:13.584584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 02:23:13.584601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 02:23:13.584612 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:13.584622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 02:23:13.584646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 
02:23:13.584656 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:13.584665 | orchestrator | 2026-03-24 02:23:13.584676 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-24 02:23:13.584687 | orchestrator | Tuesday 24 March 2026 02:23:05 +0000 (0:00:03.162) 0:01:44.056 ********* 2026-03-24 02:23:13.584696 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:13.584724 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:23:13.584733 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:23:13.584742 | orchestrator | 2026-03-24 02:23:13.584751 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-24 02:23:13.584760 | orchestrator | Tuesday 24 March 2026 02:23:06 +0000 (0:00:01.332) 0:01:45.389 ********* 2026-03-24 02:23:13.584768 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:13.584777 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:23:13.584786 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:23:13.584794 | orchestrator | 2026-03-24 02:23:13.584803 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-24 02:23:13.584812 | orchestrator | Tuesday 24 March 2026 02:23:08 +0000 (0:00:01.942) 0:01:47.331 ********* 2026-03-24 02:23:13.584820 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:13.584829 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:13.584838 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:13.584846 | orchestrator | 2026-03-24 02:23:13.584855 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-24 02:23:13.584864 | orchestrator | Tuesday 24 March 2026 02:23:09 +0000 (0:00:00.277) 0:01:47.609 ********* 2026-03-24 02:23:13.584873 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:23:13.584882 | orchestrator | 
2026-03-24 02:23:13.584890 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-24 02:23:13.584899 | orchestrator | Tuesday 24 March 2026 02:23:10 +0000 (0:00:00.959) 0:01:48.569 ********* 2026-03-24 02:23:13.584924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 02:23:13.584937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 02:23:13.584946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 02:23:13.584957 | orchestrator | 2026-03-24 02:23:13.584973 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-24 02:23:13.584989 | orchestrator | Tuesday 24 March 2026 02:23:13 +0000 (0:00:02.916) 0:01:51.485 ********* 2026-03-24 02:23:13.585006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-24 02:23:13.585031 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:13.585049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-24 02:23:13.585066 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:13.585244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-24 02:23:13.585273 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:13.585284 | orchestrator | 2026-03-24 02:23:13.585294 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-24 02:23:13.585305 | orchestrator | Tuesday 24 March 2026 02:23:13 +0000 (0:00:00.360) 0:01:51.845 ********* 2026-03-24 02:23:13.585316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-24 02:23:13.585338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-24 02:23:21.857954 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:21.858123 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-24 02:23:21.858139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-24 02:23:21.858150 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:21.858159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-24 02:23:21.858168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-24 02:23:21.858198 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:21.858206 | orchestrator | 2026-03-24 02:23:21.858279 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-24 02:23:21.858289 | orchestrator | Tuesday 24 March 2026 02:23:14 +0000 (0:00:00.786) 0:01:52.632 ********* 2026-03-24 02:23:21.858297 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:21.858305 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:23:21.858313 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:23:21.858321 | orchestrator | 2026-03-24 02:23:21.858329 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-24 02:23:21.858337 | orchestrator | Tuesday 24 March 2026 02:23:15 +0000 (0:00:01.312) 0:01:53.944 ********* 2026-03-24 02:23:21.858345 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:21.858365 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:23:21.858373 | orchestrator | changed: 
[testbed-node-2] 2026-03-24 02:23:21.858381 | orchestrator | 2026-03-24 02:23:21.858397 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-24 02:23:21.858406 | orchestrator | Tuesday 24 March 2026 02:23:17 +0000 (0:00:01.941) 0:01:55.885 ********* 2026-03-24 02:23:21.858413 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:21.858436 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:21.858450 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:21.858470 | orchestrator | 2026-03-24 02:23:21.858483 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-24 02:23:21.858496 | orchestrator | Tuesday 24 March 2026 02:23:17 +0000 (0:00:00.300) 0:01:56.186 ********* 2026-03-24 02:23:21.858509 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:23:21.858523 | orchestrator | 2026-03-24 02:23:21.858537 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-24 02:23:21.858551 | orchestrator | Tuesday 24 March 2026 02:23:18 +0000 (0:00:00.997) 0:01:57.183 ********* 2026-03-24 02:23:21.858596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 02:23:21.858639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 02:23:21.858661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 02:23:23.334704 | orchestrator | 2026-03-24 02:23:23.334812 | orchestrator | TASK [haproxy-config : Add configuration 
for horizon when using single external frontend] *** 2026-03-24 02:23:23.334830 | orchestrator | Tuesday 24 March 2026 02:23:21 +0000 (0:00:03.120) 0:02:00.304 ********* 2026-03-24 02:23:23.334868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 02:23:23.334885 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:23.334919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 02:23:23.334952 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:23.334971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 02:23:23.334984 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:23.334995 | orchestrator | 2026-03-24 02:23:23.335007 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-24 02:23:23.335018 | orchestrator | Tuesday 24 March 2026 02:23:22 +0000 (0:00:00.599) 0:02:00.904 ********* 2026-03-24 02:23:23.335030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-24 02:23:23.335044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  
2026-03-24 02:23:23.335065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-24 02:23:23.335085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 02:23:31.597045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-24 02:23:31.597162 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:31.597182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-24 02:23:31.597198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 02:23:31.597256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-24 02:23:31.597270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 02:23:31.597283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-24 02:23:31.597294 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:31.597306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-24 02:23:31.597318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 02:23:31.597329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-24 02:23:31.597361 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 02:23:31.597372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-24 02:23:31.597383 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:31.597394 | orchestrator | 2026-03-24 02:23:31.597407 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-24 02:23:31.597419 | orchestrator | Tuesday 24 March 2026 02:23:23 +0000 (0:00:00.876) 0:02:01.780 ********* 2026-03-24 02:23:31.597430 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:31.597441 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:23:31.597452 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:23:31.597462 | orchestrator | 2026-03-24 02:23:31.597474 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-24 02:23:31.597485 | orchestrator | Tuesday 24 March 2026 02:23:24 +0000 (0:00:01.614) 0:02:03.394 ********* 2026-03-24 02:23:31.597496 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:31.597507 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:23:31.597518 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:23:31.597529 | orchestrator | 2026-03-24 02:23:31.597540 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-24 02:23:31.597551 | orchestrator | Tuesday 24 March 2026 02:23:26 +0000 (0:00:01.935) 0:02:05.329 ********* 2026-03-24 02:23:31.597569 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:31.597590 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:31.597632 | 
orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:31.597651 | orchestrator | 2026-03-24 02:23:31.597670 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-24 02:23:31.597690 | orchestrator | Tuesday 24 March 2026 02:23:27 +0000 (0:00:00.296) 0:02:05.626 ********* 2026-03-24 02:23:31.597710 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:31.597730 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:31.597751 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:31.597769 | orchestrator | 2026-03-24 02:23:31.597789 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-24 02:23:31.597804 | orchestrator | Tuesday 24 March 2026 02:23:27 +0000 (0:00:00.282) 0:02:05.909 ********* 2026-03-24 02:23:31.597815 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:23:31.597826 | orchestrator | 2026-03-24 02:23:31.597837 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-24 02:23:31.597847 | orchestrator | Tuesday 24 March 2026 02:23:28 +0000 (0:00:01.120) 0:02:07.029 ********* 2026-03-24 02:23:31.597870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:23:31.597888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:23:31.597909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:23:31.597922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:23:31.597945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:23:32.150160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:23:32.150327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:23:32.150390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:23:32.150412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:23:32.150431 | orchestrator | 2026-03-24 02:23:32.150450 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-24 02:23:32.150468 | orchestrator | Tuesday 24 March 2026 02:23:31 +0000 (0:00:03.012) 0:02:10.041 ********* 2026-03-24 02:23:32.150511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:23:32.150542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:23:32.150559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:23:32.150588 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:32.150605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:23:32.150621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:23:32.150638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:23:32.150653 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:32.150686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:23:40.816985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:23:40.817109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:23:40.817124 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:40.817136 | orchestrator | 2026-03-24 02:23:40.817147 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-24 02:23:40.817159 | orchestrator | Tuesday 24 March 2026 02:23:32 +0000 (0:00:00.546) 0:02:10.588 ********* 2026-03-24 02:23:40.817171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-24 02:23:40.817184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-24 02:23:40.817196 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:40.817206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-24 02:23:40.817217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-24 02:23:40.817226 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:40.817236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-24 02:23:40.817246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-24 02:23:40.817312 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:40.817323 | orchestrator | 2026-03-24 02:23:40.817332 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-24 02:23:40.817342 | orchestrator | Tuesday 24 March 2026 02:23:33 +0000 (0:00:00.935) 0:02:11.523 ********* 2026-03-24 02:23:40.817351 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:40.817362 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:23:40.817372 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:23:40.817381 | orchestrator | 2026-03-24 02:23:40.817391 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-24 02:23:40.817411 | orchestrator | Tuesday 24 March 2026 02:23:34 +0000 (0:00:01.344) 0:02:12.868 ********* 2026-03-24 02:23:40.817451 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:40.817461 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:23:40.817471 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:23:40.817480 | orchestrator | 2026-03-24 02:23:40.817489 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-24 02:23:40.817499 | orchestrator | Tuesday 24 March 2026 02:23:36 +0000 (0:00:01.914) 0:02:14.782 ********* 2026-03-24 02:23:40.817509 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:40.817519 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:40.817529 | orchestrator | skipping: [testbed-node-2] 
2026-03-24 02:23:40.817538 | orchestrator | 2026-03-24 02:23:40.817561 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-24 02:23:40.817591 | orchestrator | Tuesday 24 March 2026 02:23:36 +0000 (0:00:00.297) 0:02:15.080 ********* 2026-03-24 02:23:40.817601 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:23:40.817611 | orchestrator | 2026-03-24 02:23:40.817620 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-24 02:23:40.817630 | orchestrator | Tuesday 24 March 2026 02:23:37 +0000 (0:00:01.116) 0:02:16.197 ********* 2026-03-24 02:23:40.817641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 02:23:40.817656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 02:23:40.817667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 02:23:40.817686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 02:23:40.817704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 02:23:45.862824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 02:23:45.862967 | orchestrator | 2026-03-24 02:23:45.862984 | orchestrator | 
TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-24 02:23:45.862998 | orchestrator | Tuesday 24 March 2026 02:23:40 +0000 (0:00:03.061) 0:02:19.258 ********* 2026-03-24 02:23:45.863012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 02:23:45.863082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 02:23:45.863126 | orchestrator | 
skipping: [testbed-node-0] 2026-03-24 02:23:45.863140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 02:23:45.863180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 02:23:45.863193 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:45.863205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 02:23:45.863216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 02:23:45.863228 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:45.863239 | orchestrator | 2026-03-24 02:23:45.863343 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-24 02:23:45.863361 | orchestrator | Tuesday 24 March 2026 02:23:41 +0000 (0:00:00.660) 0:02:19.918 ********* 2026-03-24 02:23:45.863375 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-24 02:23:45.863390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-24 02:23:45.863405 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:45.863418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-24 02:23:45.863431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-24 02:23:45.863443 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:45.863455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-24 02:23:45.863467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-24 02:23:45.863480 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:23:45.863492 | orchestrator | 2026-03-24 02:23:45.863504 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-24 02:23:45.863523 | orchestrator | Tuesday 24 March 2026 02:23:42 +0000 (0:00:00.838) 0:02:20.756 ********* 2026-03-24 02:23:45.863537 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:45.863549 | orchestrator | changed: [testbed-node-1] 2026-03-24 
02:23:45.863561 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:23:45.863573 | orchestrator | 2026-03-24 02:23:45.863585 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-24 02:23:45.863598 | orchestrator | Tuesday 24 March 2026 02:23:43 +0000 (0:00:01.626) 0:02:22.383 ********* 2026-03-24 02:23:45.863609 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:23:45.863621 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:23:45.863633 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:23:45.863645 | orchestrator | 2026-03-24 02:23:45.863657 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-24 02:23:45.863678 | orchestrator | Tuesday 24 March 2026 02:23:45 +0000 (0:00:01.923) 0:02:24.306 ********* 2026-03-24 02:23:50.037892 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:23:50.038007 | orchestrator | 2026-03-24 02:23:50.038087 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-24 02:23:50.038100 | orchestrator | Tuesday 24 March 2026 02:23:46 +0000 (0:00:00.997) 0:02:25.303 ********* 2026-03-24 02:23:50.038116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 02:23:50.038157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.038172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.038185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.038211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 02:23:50.038244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.038257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.038367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.038382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 02:23:50.038394 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.038412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.038435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.952769 | orchestrator | 
2026-03-24 02:23:50.952926 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-24 02:23:50.952955 | orchestrator | Tuesday 24 March 2026 02:23:50 +0000 (0:00:03.257) 0:02:28.561 ********* 2026-03-24 02:23:50.952973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 02:23:50.953024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.953038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.953050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.953062 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:23:50.953092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 02:23:50.953124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.953148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.953160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.953171 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:23:50.953182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 02:23:50.953194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.953211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 02:23:50.953230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 02:24:01.546448 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:01.546579 | orchestrator | 2026-03-24 02:24:01.546604 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-24 02:24:01.546624 | orchestrator | Tuesday 24 March 2026 02:23:51 +0000 (0:00:00.916) 0:02:29.478 ********* 2026-03-24 02:24:01.546642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-24 02:24:01.546661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-24 02:24:01.546680 | orchestrator | skipping: [testbed-node-0] 
2026-03-24 02:24:01.546698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-24 02:24:01.546716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-24 02:24:01.546733 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:01.546750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-24 02:24:01.546766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-24 02:24:01.546781 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:01.546795 | orchestrator | 2026-03-24 02:24:01.546811 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-24 02:24:01.546827 | orchestrator | Tuesday 24 March 2026 02:23:51 +0000 (0:00:00.817) 0:02:30.295 ********* 2026-03-24 02:24:01.546843 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:24:01.546860 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:24:01.546876 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:24:01.546893 | orchestrator | 2026-03-24 02:24:01.546909 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-24 02:24:01.546926 | orchestrator | Tuesday 24 March 2026 02:23:53 +0000 (0:00:01.336) 0:02:31.632 ********* 2026-03-24 02:24:01.546943 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:24:01.546960 | orchestrator | changed: [testbed-node-1] 
2026-03-24 02:24:01.546978 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:24:01.546995 | orchestrator | 2026-03-24 02:24:01.547011 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-24 02:24:01.547028 | orchestrator | Tuesday 24 March 2026 02:23:55 +0000 (0:00:01.942) 0:02:33.575 ********* 2026-03-24 02:24:01.547044 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:24:01.547060 | orchestrator | 2026-03-24 02:24:01.547077 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-24 02:24:01.547094 | orchestrator | Tuesday 24 March 2026 02:23:56 +0000 (0:00:01.224) 0:02:34.799 ********* 2026-03-24 02:24:01.547112 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 02:24:01.547128 | orchestrator | 2026-03-24 02:24:01.547144 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-24 02:24:01.547161 | orchestrator | Tuesday 24 March 2026 02:23:59 +0000 (0:00:03.067) 0:02:37.867 ********* 2026-03-24 02:24:01.547260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:24:01.547286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 02:24:01.547329 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:01.547354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:24:01.547385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 02:24:01.547401 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:01.547432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:24:03.686725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 02:24:03.686821 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:03.686834 | orchestrator | 2026-03-24 02:24:03.686843 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-24 02:24:03.686853 | orchestrator | Tuesday 24 March 2026 02:24:01 +0000 (0:00:02.120) 0:02:39.987 ********* 2026-03-24 02:24:03.686880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:24:03.686909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 02:24:03.686918 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:03.686943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:24:03.686968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 02:24:03.686976 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:03.686985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:24:03.687000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 02:24:12.900050 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:12.900200 | orchestrator | 2026-03-24 02:24:12.900231 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-24 02:24:12.900252 | orchestrator | Tuesday 24 March 2026 02:24:03 +0000 (0:00:02.143) 0:02:42.131 ********* 2026-03-24 02:24:12.900276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}})  2026-03-24 02:24:12.900442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 02:24:12.900470 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:12.900509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 02:24:12.900530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 02:24:12.900550 
| orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:12.900570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 02:24:12.900590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 02:24:12.900609 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:12.900630 | orchestrator | 2026-03-24 02:24:12.900651 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-24 02:24:12.900670 | orchestrator | Tuesday 24 March 2026 02:24:06 +0000 (0:00:02.560) 0:02:44.691 ********* 2026-03-24 02:24:12.900689 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:24:12.900736 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:24:12.900757 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:24:12.900792 | orchestrator | 2026-03-24 02:24:12.900811 | orchestrator | TASK [proxysql-config : Copying over mariadb 
ProxySQL rules config] ************ 2026-03-24 02:24:12.900831 | orchestrator | Tuesday 24 March 2026 02:24:08 +0000 (0:00:02.006) 0:02:46.698 ********* 2026-03-24 02:24:12.900850 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:12.900869 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:12.900888 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:12.900906 | orchestrator | 2026-03-24 02:24:12.900924 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-24 02:24:12.900942 | orchestrator | Tuesday 24 March 2026 02:24:09 +0000 (0:00:01.378) 0:02:48.077 ********* 2026-03-24 02:24:12.900954 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:12.900969 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:12.900987 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:12.901005 | orchestrator | 2026-03-24 02:24:12.901023 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-24 02:24:12.901041 | orchestrator | Tuesday 24 March 2026 02:24:09 +0000 (0:00:00.293) 0:02:48.370 ********* 2026-03-24 02:24:12.901059 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:24:12.901071 | orchestrator | 2026-03-24 02:24:12.901083 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-24 02:24:12.901100 | orchestrator | Tuesday 24 March 2026 02:24:11 +0000 (0:00:01.295) 0:02:49.666 ********* 2026-03-24 02:24:12.901130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-24 02:24:12.901154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-24 02:24:12.901173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-24 02:24:12.901191 | orchestrator | 
2026-03-24 02:24:12.901209 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-24 02:24:12.901228 | orchestrator | Tuesday 24 March 2026 02:24:12 +0000 (0:00:01.494) 0:02:51.161 ********* 2026-03-24 02:24:12.901275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-24 02:24:20.533944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-24 02:24:20.534190 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:20.534226 | orchestrator | skipping: [testbed-node-1] 
2026-03-24 02:24:20.534249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-24 02:24:20.534268 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:20.534286 | orchestrator | 2026-03-24 02:24:20.534306 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-24 02:24:20.534384 | orchestrator | Tuesday 24 March 2026 02:24:13 +0000 (0:00:00.358) 0:02:51.520 ********* 2026-03-24 02:24:20.534409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-24 02:24:20.534430 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:20.534449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-24 02:24:20.534468 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:20.534487 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-24 02:24:20.534560 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:20.534583 | orchestrator | 2026-03-24 02:24:20.534601 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-24 02:24:20.534647 | orchestrator | Tuesday 24 March 2026 02:24:13 +0000 (0:00:00.765) 0:02:52.285 ********* 2026-03-24 02:24:20.534667 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:20.534685 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:20.534719 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:20.534738 | orchestrator | 2026-03-24 02:24:20.534757 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-24 02:24:20.534777 | orchestrator | Tuesday 24 March 2026 02:24:14 +0000 (0:00:00.427) 0:02:52.713 ********* 2026-03-24 02:24:20.534797 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:20.534816 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:20.534835 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:20.534855 | orchestrator | 2026-03-24 02:24:20.534873 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-24 02:24:20.534891 | orchestrator | Tuesday 24 March 2026 02:24:15 +0000 (0:00:01.174) 0:02:53.888 ********* 2026-03-24 02:24:20.534909 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:20.534929 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:20.534948 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:20.534966 | orchestrator | 2026-03-24 02:24:20.534985 | orchestrator | TASK [include_role : neutron] 
************************************************** 2026-03-24 02:24:20.535003 | orchestrator | Tuesday 24 March 2026 02:24:15 +0000 (0:00:00.309) 0:02:54.197 ********* 2026-03-24 02:24:20.535021 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:24:20.535040 | orchestrator | 2026-03-24 02:24:20.535059 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-24 02:24:20.535079 | orchestrator | Tuesday 24 March 2026 02:24:17 +0000 (0:00:01.341) 0:02:55.538 ********* 2026-03-24 02:24:20.535128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 02:24:20.535162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.535184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.535221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.535242 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-24 02:24:20.535275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.686610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:20.686726 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:20.686744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.686779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-03-24 02:24:20.686793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.686805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-24 02:24:20.686834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:20.686847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.686865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 02:24:20.686889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 02:24:20.686903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:20.686915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.686934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.790729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.790828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-24 02:24:20.790841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.790851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 02:24:20.790861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:20.790890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.790905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:20.790914 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.790923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.790930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.790938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:20.790956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-24 02:24:20.935170 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.935267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.935281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-24 02:24:20.935294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:20.935307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:20.935318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.935421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:20.935451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.935463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 
02:24:20.935477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:20.935488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:20.935498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:20.935527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-24 02:24:22.052194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:22.052305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2026-03-24 02:24:22.052322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 02:24:22.052450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:22.052465 | orchestrator | 2026-03-24 02:24:22.052478 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-24 02:24:22.052491 | orchestrator | Tuesday 24 March 2026 02:24:21 +0000 (0:00:03.937) 
0:02:59.476 ********* 2026-03-24 02:24:22.052546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 02:24:22.052586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.052597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.052606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.052614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-24 02:24:22.052630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.052648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 02:24:22.149679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:22.149779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.149798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:22.149811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.149847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.149874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2026-03-24 02:24:22.149906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:22.149919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-24 02:24:22.149932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.149951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.149968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-24 02:24:22.149980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:22.150140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:22.211192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:22.211281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.211299 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.211391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 02:24:22.211428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 02:24:22.211467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:22.211484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:22.211501 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:22.211519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.211548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.211565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-24 02:24:22.211579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.211597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:22.434003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.434243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.434300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-24 
02:24:22.434320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.434407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 02:24:22.434448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:22.434459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:22.434471 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:22.434490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:22.434500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.434517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:22.434532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:22.434547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-24 02:24:22.434571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-24 02:24:32.492482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 02:24:32.492624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 02:24:32.492658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 02:24:32.492670 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:32.492682 | orchestrator | 2026-03-24 02:24:32.492693 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-24 02:24:32.492705 | orchestrator | Tuesday 24 March 2026 02:24:22 +0000 (0:00:01.401) 0:03:00.877 ********* 2026-03-24 02:24:32.492716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-24 02:24:32.492729 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-24 02:24:32.492741 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:24:32.492750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-24 02:24:32.492760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-24 02:24:32.492770 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:24:32.492779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-24 02:24:32.492789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-24 02:24:32.492798 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:24:32.492817 | orchestrator | 2026-03-24 02:24:32.492827 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-24 02:24:32.492837 | orchestrator | Tuesday 24 March 2026 02:24:24 +0000 (0:00:01.810) 0:03:02.688 ********* 2026-03-24 02:24:32.492847 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:24:32.492857 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:24:32.492883 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:24:32.492894 | orchestrator | 2026-03-24 02:24:32.492904 | orchestrator | TASK [proxysql-config : Copying over neutron 
ProxySQL rules config] ************ 2026-03-24 02:24:32.492914 | orchestrator | Tuesday 24 March 2026 02:24:25 +0000 (0:00:01.399) 0:03:04.087 ********* 2026-03-24 02:24:32.492924 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:24:32.492934 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:24:32.492944 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:24:32.492954 | orchestrator | 2026-03-24 02:24:32.492963 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-24 02:24:32.492973 | orchestrator | Tuesday 24 March 2026 02:24:27 +0000 (0:00:02.026) 0:03:06.114 ********* 2026-03-24 02:24:32.492982 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:24:32.492991 | orchestrator | 2026-03-24 02:24:32.493001 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-24 02:24:32.493011 | orchestrator | Tuesday 24 March 2026 02:24:28 +0000 (0:00:01.162) 0:03:07.277 ********* 2026-03-24 02:24:32.493023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2026-03-24 02:24:32.493042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 02:24:32.493054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 02:24:32.493072 | orchestrator | 2026-03-24 02:24:32.493083 | orchestrator | TASK 
[haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-24 02:24:32.493095 | orchestrator | Tuesday 24 March 2026 02:24:32 +0000 (0:00:03.200) 0:03:10.478 *********
2026-03-24 02:24:32.493115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-24 02:24:42.152083 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:24:42.152214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-24 02:24:42.152241 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:24:42.152279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-24 02:24:42.152299 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:24:42.152316 | orchestrator |
2026-03-24 02:24:42.152333 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-03-24 02:24:42.152351 | orchestrator | Tuesday 24 March 2026 02:24:32 +0000 (0:00:00.463) 0:03:10.941 *********
2026-03-24 02:24:42.152435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-24 02:24:42.152458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-24 02:24:42.152504 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:24:42.152522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-24 02:24:42.152539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-24 02:24:42.152556 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:24:42.152572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-24 02:24:42.152590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-03-24 02:24:42.152607 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:24:42.152625 | orchestrator |
2026-03-24 02:24:42.152644 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-03-24 02:24:42.152662 | orchestrator | Tuesday 24 March 2026 02:24:33 +0000 (0:00:00.722) 0:03:11.664 *********
2026-03-24 02:24:42.152680 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:24:42.152697 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:24:42.152715 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:24:42.152733 | orchestrator |
2026-03-24 02:24:42.152751 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-03-24 02:24:42.152770 | orchestrator | Tuesday 24 March 2026 02:24:34 +0000 (0:00:01.736) 0:03:13.400 *********
2026-03-24 02:24:42.152788 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:24:42.152806 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:24:42.152845 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:24:42.152865 | orchestrator |
2026-03-24 02:24:42.152884 | orchestrator | TASK [include_role : nova] *****************************************************
2026-03-24 02:24:42.152901 | orchestrator | Tuesday 24 March 2026 02:24:36 +0000 (0:00:01.462) 0:03:15.149 *********
2026-03-24 02:24:42.152919 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:24:42.152935 | orchestrator |
2026-03-24 02:24:42.152953 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-03-24 02:24:42.152970 | orchestrator | Tuesday 24 March 2026 02:24:38 +0000 (0:00:01.462) 0:03:16.611 *********
2026-03-24 02:24:42.152992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-24 02:24:42.153035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-24 02:24:42.153057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:24:42.153089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:24:43.241087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 02:24:43.241187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 02:24:43.241220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-24 02:24:43.241258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:24:43.241269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 02:24:43.241281 | orchestrator |
2026-03-24 02:24:43.241293 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-03-24 02:24:43.241309 | orchestrator | Tuesday 24 March 2026 02:24:42 +0000 (0:00:03.978) 0:03:20.591 *********
2026-03-24 02:24:43.241356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-24 02:24:43.241413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:24:43.241450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 02:24:43.241470 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:24:43.241489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-24 02:24:43.241518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:24:53.357815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 02:24:53.357947 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:24:53.358004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-24 02:24:53.358142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 02:24:53.358168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 02:24:53.358184 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:24:53.358196 | orchestrator |
2026-03-24 02:24:53.358208 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-03-24 02:24:53.358221 | orchestrator | Tuesday 24 March 2026 02:24:43 +0000 (0:00:01.091) 0:03:21.683 *********
2026-03-24 02:24:53.358233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358307 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:24:53.358318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358378 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:24:53.358425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-03-24 02:24:53.358484 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:24:53.358497 | orchestrator |
2026-03-24 02:24:53.358509 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-03-24 02:24:53.358521 | orchestrator | Tuesday 24 March 2026 02:24:44 +0000 (0:00:00.878) 0:03:22.562 *********
2026-03-24 02:24:53.358533 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:24:53.358545 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:24:53.358557 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:24:53.358569 | orchestrator |
2026-03-24 02:24:53.358581 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-03-24 02:24:53.358594 | orchestrator | Tuesday 24 March 2026 02:24:45 +0000 (0:00:01.290) 0:03:23.852 *********
2026-03-24 02:24:53.358606 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:24:53.358618 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:24:53.358630 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:24:53.358642 | orchestrator |
2026-03-24 02:24:53.358654 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-03-24 02:24:53.358667 | orchestrator | Tuesday 24 March 2026 02:24:47 +0000 (0:00:01.971) 0:03:25.823 *********
2026-03-24 02:24:53.358679 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:24:53.358691 | orchestrator |
2026-03-24 02:24:53.358704 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-03-24 02:24:53.358716 | orchestrator | Tuesday 24 March 2026 02:24:48 +0000 (0:00:01.505) 0:03:27.328 *********
2026-03-24 02:24:53.358727 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-03-24 02:24:53.358739 | orchestrator |
2026-03-24 02:24:53.358750 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-03-24 02:24:53.358760 | orchestrator | Tuesday 24 March 2026 02:24:49 +0000 (0:00:00.782) 0:03:28.111 *********
2026-03-24 02:24:53.358773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:24:53.358802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.164769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.164874 | orchestrator |
2026-03-24 02:25:04.164887 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-03-24 02:25:04.164897 | orchestrator | Tuesday 24 March 2026 02:24:53 +0000 (0:00:03.683) 0:03:31.794 *********
2026-03-24 02:25:04.164905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.164912 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:25:04.164934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.164941 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:25:04.164948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.164955 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:25:04.164962 | orchestrator |
2026-03-24 02:25:04.164969 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-03-24 02:25:04.164976 | orchestrator | Tuesday 24 March 2026 02:24:54 +0000 (0:00:01.247) 0:03:33.042 *********
2026-03-24 02:25:04.164985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-24 02:25:04.164994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-24 02:25:04.165003 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:25:04.165025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-24 02:25:04.165032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-24 02:25:04.165039 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:25:04.165046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-24 02:25:04.165053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-03-24 02:25:04.165072 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:25:04.165079 | orchestrator |
2026-03-24 02:25:04.165086 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-24 02:25:04.165092 | orchestrator | Tuesday 24 March 2026 02:24:56 +0000 (0:00:02.353) 0:03:34.469 *********
2026-03-24 02:25:04.165099 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:25:04.165106 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:25:04.165112 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:25:04.165119 | orchestrator |
2026-03-24 02:25:04.165125 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-24 02:25:04.165132 | orchestrator | Tuesday 24 March 2026 02:24:58 +0000 (0:00:02.717) 0:03:36.822 *********
2026-03-24 02:25:04.165139 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:25:04.165145 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:25:04.165152 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:25:04.165158 | orchestrator |
2026-03-24 02:25:04.165165 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-03-24 02:25:04.165171 | orchestrator | Tuesday 24 March 2026 02:25:01 +0000 (0:00:02.717) 0:03:39.540 *********
2026-03-24 02:25:04.165179 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-03-24 02:25:04.165187 | orchestrator |
2026-03-24 02:25:04.165194 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-03-24 02:25:04.165200 | orchestrator | Tuesday 24 March 2026 02:25:02 +0000 (0:00:00.982) 0:03:40.523 *********
2026-03-24 02:25:04.165212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.165219 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:25:04.165226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.165233 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:25:04.165245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.165252 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:25:04.165259 | orchestrator |
2026-03-24 02:25:04.165265 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-03-24 02:25:04.165272 | orchestrator | Tuesday 24 March 2026 02:25:02 +0000 (0:00:00.927) 0:03:41.450 *********
2026-03-24 02:25:04.165279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.165286 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:25:04.165293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:04.165305 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:25:25.712898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-03-24 02:25:25.713042 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:25:25.713070 | orchestrator |
2026-03-24 02:25:25.713092 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-03-24 02:25:25.713113 | orchestrator | Tuesday 24 March 2026 02:25:04 +0000 (0:00:01.366) 0:03:42.600 *********
2026-03-24 02:25:25.713132 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:25:25.713151 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:25:25.713169 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:25:25.713187 | orchestrator |
2026-03-24 02:25:25.713207 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-03-24 02:25:25.713225 | orchestrator | Tuesday 24 March 2026 02:25:05 +0000 (0:00:01.366) 0:03:43.967 *********
2026-03-24 02:25:25.713243 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:25:25.713264 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:25:25.713281 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:25:25.713300 | orchestrator |
2026-03-24 02:25:25.713318 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-03-24 02:25:25.713338 | orchestrator | Tuesday 24 March 2026 02:25:08 +0000 (0:00:02.523) 0:03:46.490 *********
2026-03-24 02:25:25.713357 |
orchestrator | ok: [testbed-node-0] 2026-03-24 02:25:25.713376 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:25:25.713426 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:25:25.713472 | orchestrator | 2026-03-24 02:25:25.713494 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-24 02:25:25.713544 | orchestrator | Tuesday 24 March 2026 02:25:10 +0000 (0:00:02.567) 0:03:49.058 ********* 2026-03-24 02:25:25.713584 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-24 02:25:25.713606 | orchestrator | 2026-03-24 02:25:25.713625 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-24 02:25:25.713645 | orchestrator | Tuesday 24 March 2026 02:25:11 +0000 (0:00:01.204) 0:03:50.263 ********* 2026-03-24 02:25:25.713662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 02:25:25.713676 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:25.713689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 02:25:25.713702 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:25.713714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 02:25:25.713728 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:25.713740 | orchestrator | 2026-03-24 02:25:25.713754 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-24 02:25:25.713768 | orchestrator | Tuesday 24 March 2026 02:25:12 +0000 (0:00:01.184) 0:03:51.448 ********* 2026-03-24 02:25:25.713802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 02:25:25.713814 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:25.713826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 02:25:25.713847 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:25.713858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 02:25:25.713869 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:25.713880 | orchestrator | 2026-03-24 02:25:25.713891 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-24 02:25:25.713908 | orchestrator | Tuesday 24 March 2026 02:25:14 +0000 (0:00:01.206) 0:03:52.654 ********* 2026-03-24 02:25:25.713919 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:25.713930 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:25.713941 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:25.713951 | orchestrator | 2026-03-24 02:25:25.713962 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-24 02:25:25.713973 | orchestrator | Tuesday 24 March 2026 02:25:15 +0000 (0:00:01.645) 0:03:54.299 ********* 2026-03-24 02:25:25.713984 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:25:25.713995 | orchestrator | ok: 
[testbed-node-2] 2026-03-24 02:25:25.714006 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:25:25.714080 | orchestrator | 2026-03-24 02:25:25.714093 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-24 02:25:25.714104 | orchestrator | Tuesday 24 March 2026 02:25:18 +0000 (0:00:02.253) 0:03:56.553 ********* 2026-03-24 02:25:25.714115 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:25:25.714126 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:25:25.714137 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:25:25.714147 | orchestrator | 2026-03-24 02:25:25.714158 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-24 02:25:25.714169 | orchestrator | Tuesday 24 March 2026 02:25:21 +0000 (0:00:03.022) 0:03:59.576 ********* 2026-03-24 02:25:25.714181 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:25:25.714192 | orchestrator | 2026-03-24 02:25:25.714202 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-24 02:25:25.714214 | orchestrator | Tuesday 24 March 2026 02:25:22 +0000 (0:00:01.518) 0:04:01.094 ********* 2026-03-24 02:25:25.714227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 02:25:25.714239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 02:25:25.714270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.376143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.376286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:25:26.376313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 02:25:26.376334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 02:25:26.376354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.376402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.376444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:25:26.376510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 02:25:26.376531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 02:25:26.376551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.376571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.376645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:25:26.376668 | orchestrator | 2026-03-24 02:25:26.376691 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using 
single external frontend] *** 2026-03-24 02:25:26.376713 | orchestrator | Tuesday 24 March 2026 02:25:25 +0000 (0:00:03.191) 0:04:04.286 ********* 2026-03-24 02:25:26.376749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 02:25:26.509604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 02:25:26.509711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.509729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.509743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:25:26.509776 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:26.509792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 02:25:26.509805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 02:25:26.509841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.509855 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.509867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:25:26.509878 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:26.509889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 02:25:26.509909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 02:25:26.509921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 02:25:26.509947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 02:25:37.757774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 02:25:37.757862 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:37.757871 | orchestrator | 2026-03-24 02:25:37.757878 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-24 02:25:37.757885 | orchestrator | Tuesday 24 March 2026 02:25:26 +0000 (0:00:00.669) 0:04:04.955 ********* 2026-03-24 02:25:37.757891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 02:25:37.757899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 02:25:37.757928 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:37.757938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2026-03-24 02:25:37.757948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 02:25:37.757957 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:37.757967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 02:25:37.757976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 02:25:37.757984 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:37.757993 | orchestrator | 2026-03-24 02:25:37.758003 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-24 02:25:37.758061 | orchestrator | Tuesday 24 March 2026 02:25:27 +0000 (0:00:00.844) 0:04:05.799 ********* 2026-03-24 02:25:37.758073 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:25:37.758082 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:25:37.758091 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:25:37.758098 | orchestrator | 2026-03-24 02:25:37.758108 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-24 02:25:37.758117 | orchestrator | Tuesday 24 March 2026 02:25:29 +0000 (0:00:01.770) 0:04:07.570 ********* 2026-03-24 02:25:37.758126 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:25:37.758135 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:25:37.758144 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:25:37.758154 | orchestrator | 2026-03-24 
02:25:37.758163 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-24 02:25:37.758172 | orchestrator | Tuesday 24 March 2026 02:25:31 +0000 (0:00:02.127) 0:04:09.697 ********* 2026-03-24 02:25:37.758181 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:25:37.758191 | orchestrator | 2026-03-24 02:25:37.758200 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-24 02:25:37.758210 | orchestrator | Tuesday 24 March 2026 02:25:32 +0000 (0:00:01.318) 0:04:11.015 ********* 2026-03-24 02:25:37.758234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:25:37.758265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:25:37.758287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:25:37.758299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:25:37.758311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:25:37.758334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:25:39.623040 | orchestrator | 2026-03-24 02:25:39.623140 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-24 02:25:39.623156 | orchestrator | Tuesday 24 March 2026 02:25:37 +0000 (0:00:05.178) 0:04:16.194 ********* 2026-03-24 02:25:39.623171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-24 02:25:39.623188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-24 02:25:39.623201 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:39.623214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-24 02:25:39.623244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-24 02:25:39.623310 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:39.623332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-24 02:25:39.623351 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-24 02:25:39.623369 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:39.623387 | orchestrator | 2026-03-24 02:25:39.623405 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-24 02:25:39.623463 | orchestrator | Tuesday 24 March 2026 02:25:38 +0000 (0:00:00.954) 0:04:17.149 ********* 2026-03-24 02:25:39.623515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-24 02:25:39.623537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-24 02:25:39.623561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-24 02:25:39.623582 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:39.623602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-24 02:25:39.623644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-24 02:25:39.623665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-24 02:25:39.623685 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:39.623705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-24 02:25:39.623725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-24 02:25:39.623766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-24 02:25:45.375542 | 
orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:45.375654 | orchestrator | 2026-03-24 02:25:45.375668 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-24 02:25:45.375680 | orchestrator | Tuesday 24 March 2026 02:25:39 +0000 (0:00:00.914) 0:04:18.063 ********* 2026-03-24 02:25:45.375690 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:45.375699 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:45.375708 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:45.375717 | orchestrator | 2026-03-24 02:25:45.375728 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-24 02:25:45.375737 | orchestrator | Tuesday 24 March 2026 02:25:40 +0000 (0:00:00.407) 0:04:18.471 ********* 2026-03-24 02:25:45.375746 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:45.375755 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:45.375765 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:45.375774 | orchestrator | 2026-03-24 02:25:45.375783 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-24 02:25:45.375792 | orchestrator | Tuesday 24 March 2026 02:25:41 +0000 (0:00:01.390) 0:04:19.861 ********* 2026-03-24 02:25:45.375802 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:25:45.375812 | orchestrator | 2026-03-24 02:25:45.375821 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-24 02:25:45.375830 | orchestrator | Tuesday 24 March 2026 02:25:43 +0000 (0:00:01.601) 0:04:21.463 ********* 2026-03-24 02:25:45.375842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-24 02:25:45.375857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 02:25:45.375892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:45.375915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:45.375926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 02:25:45.375952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-24 02:25:45.375964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-24 02:25:45.375973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 02:25:45.375989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 02:25:45.376004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:45.376013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:45.376023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:45.376039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 
02:25:47.021962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.022138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 02:25:47.022180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-24 02:25:47.022210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-24 02:25:47.022222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.022233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 
'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.022263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 02:25:47.022274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-24 02:25:47.022292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 
'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-24 02:25:47.022307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.022321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.022338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 02:25:47.022366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-24 02:25:47.688122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-24 02:25:47.688230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.688265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.688279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 
02:25:47.688292 | orchestrator | 2026-03-24 02:25:47.688306 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-24 02:25:47.688319 | orchestrator | Tuesday 24 March 2026 02:25:47 +0000 (0:00:04.148) 0:04:25.612 ********* 2026-03-24 02:25:47.688332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-24 02:25:47.688344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 02:25:47.688397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.688410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.688423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 02:25:47.688443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-24 02:25:47.688458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-24 02:25:47.688470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.688590 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.855587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-24 02:25:47.855704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 02:25:47.855739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 02:25:47.855752 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:47.855765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.855777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.855790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 02:25:47.855851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-24 02:25:47.855868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-24 02:25:47.855886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.855898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:47.855909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-24 02:25:47.855928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 02:25:47.855939 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:47.855960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 02:25:49.321903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:49.321994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:49.322076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 02:25:49.322092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}}}})  2026-03-24 02:25:49.322106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-24 02:25:49.322135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:49.322162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 02:25:49.322172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 02:25:49.322182 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:49.322192 | orchestrator | 2026-03-24 02:25:49.322202 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-24 02:25:49.322212 | orchestrator | Tuesday 24 March 2026 02:25:48 +0000 (0:00:00.845) 0:04:26.457 ********* 2026-03-24 02:25:49.322221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-24 02:25:49.322238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-24 02:25:49.322250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-24 02:25:49.322262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-24 02:25:49.322272 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:49.322281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-24 02:25:49.322299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-24 02:25:49.322308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-24 02:25:49.322318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-24 02:25:49.322327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-24 02:25:49.322336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-24 02:25:49.322345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-24 02:25:49.322360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-24 02:25:55.974547 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:55.974653 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:55.974667 | orchestrator | 2026-03-24 02:25:55.974679 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-24 02:25:55.974690 | orchestrator | Tuesday 24 March 2026 02:25:49 +0000 (0:00:01.301) 0:04:27.759 ********* 2026-03-24 02:25:55.974700 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:55.974710 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:55.974733 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:55.974743 | orchestrator | 2026-03-24 02:25:55.974763 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-24 02:25:55.974773 | orchestrator | Tuesday 24 March 2026 02:25:49 +0000 (0:00:00.425) 0:04:28.184 ********* 2026-03-24 02:25:55.974783 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:55.974792 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:55.974802 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:55.974812 | orchestrator | 2026-03-24 02:25:55.974821 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-24 02:25:55.974831 | orchestrator | Tuesday 24 March 2026 02:25:50 +0000 (0:00:01.230) 0:04:29.414 ********* 2026-03-24 
02:25:55.974840 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:25:55.974850 | orchestrator | 2026-03-24 02:25:55.974860 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-24 02:25:55.974869 | orchestrator | Tuesday 24 March 2026 02:25:52 +0000 (0:00:01.655) 0:04:31.070 ********* 2026-03-24 02:25:55.974883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 02:25:55.974923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 02:25:55.974973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 02:25:55.974986 | orchestrator | 2026-03-24 02:25:55.975013 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-24 02:25:55.975029 | orchestrator | Tuesday 24 March 2026 02:25:54 +0000 (0:00:02.102) 0:04:33.172 ********* 2026-03-24 02:25:55.975048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 02:25:55.975075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 02:25:55.975105 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:55.975123 | orchestrator | skipping: 
[testbed-node-1] 2026-03-24 02:25:55.975141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 02:25:55.975169 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:55.975188 | orchestrator | 2026-03-24 02:25:55.975205 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-24 02:25:55.975220 | orchestrator | Tuesday 24 March 2026 02:25:55 +0000 (0:00:00.401) 0:04:33.574 ********* 2026-03-24 02:25:55.975237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-24 02:25:55.975255 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:25:55.975271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-24 02:25:55.975288 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:25:55.975304 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-24 02:25:55.975320 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:25:55.975336 | orchestrator | 2026-03-24 02:25:55.975352 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-24 02:25:55.975380 | orchestrator | Tuesday 24 March 2026 02:25:55 +0000 (0:00:00.840) 0:04:34.414 ********* 2026-03-24 02:26:05.788095 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:05.788204 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:05.788218 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:05.788231 | orchestrator | 2026-03-24 02:26:05.788243 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-24 02:26:05.788255 | orchestrator | Tuesday 24 March 2026 02:25:56 +0000 (0:00:00.439) 0:04:34.853 ********* 2026-03-24 02:26:05.788266 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:05.788277 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:05.788288 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:05.788322 | orchestrator | 2026-03-24 02:26:05.788334 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-24 02:26:05.788345 | orchestrator | Tuesday 24 March 2026 02:25:57 +0000 (0:00:01.212) 0:04:36.066 ********* 2026-03-24 02:26:05.788356 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:26:05.788366 | orchestrator | 2026-03-24 02:26:05.788378 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-24 02:26:05.788389 | orchestrator | Tuesday 24 March 2026 02:25:59 +0000 (0:00:01.454) 0:04:37.520 ********* 2026-03-24 02:26:05.788419 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 02:26:05.788437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 02:26:05.788449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 02:26:05.788480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 02:26:05.788508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 02:26:05.788637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 02:26:05.788652 | orchestrator | 2026-03-24 02:26:05.788664 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-24 02:26:05.788678 | orchestrator | 
Tuesday 24 March 2026 02:26:05 +0000 (0:00:06.081) 0:04:43.602 ********* 2026-03-24 02:26:05.788691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-24 02:26:05.788715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}})  2026-03-24 02:26:11.250400 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:11.250575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-24 02:26:11.250600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-24 02:26:11.251500 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:11.251597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-24 02:26:11.251622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-24 02:26:11.251667 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:11.251680 | orchestrator | 2026-03-24 02:26:11.251693 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-24 02:26:11.251706 | orchestrator | Tuesday 24 March 2026 02:26:05 +0000 (0:00:00.634) 0:04:44.236 ********* 2026-03-24 02:26:11.251740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251839 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:11.251858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251926 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:11.251945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-24 02:26:11.251983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-24 02:26:11.252002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-24 02:26:11.252023 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:11.252035 | orchestrator | 2026-03-24 02:26:11.252048 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-24 02:26:11.252081 | orchestrator | 
Tuesday 24 March 2026 02:26:06 +0000 (0:00:00.912) 0:04:45.148 ********* 2026-03-24 02:26:11.252107 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:26:11.252129 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:26:11.252146 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:26:11.252164 | orchestrator | 2026-03-24 02:26:11.252181 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-24 02:26:11.252199 | orchestrator | Tuesday 24 March 2026 02:26:08 +0000 (0:00:01.307) 0:04:46.455 ********* 2026-03-24 02:26:11.252215 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:26:11.252230 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:26:11.252247 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:26:11.252266 | orchestrator | 2026-03-24 02:26:11.252283 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-24 02:26:11.252303 | orchestrator | Tuesday 24 March 2026 02:26:10 +0000 (0:00:02.096) 0:04:48.552 ********* 2026-03-24 02:26:11.252322 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:11.252341 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:11.252361 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:11.252373 | orchestrator | 2026-03-24 02:26:11.252384 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-24 02:26:11.252395 | orchestrator | Tuesday 24 March 2026 02:26:10 +0000 (0:00:00.553) 0:04:49.106 ********* 2026-03-24 02:26:11.252406 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:11.252416 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:11.252427 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:11.252438 | orchestrator | 2026-03-24 02:26:11.252449 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-24 02:26:11.252460 | orchestrator | Tuesday 
24 March 2026 02:26:10 +0000 (0:00:00.284) 0:04:49.391 ********* 2026-03-24 02:26:11.252471 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:11.252494 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:58.170937 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:58.171073 | orchestrator | 2026-03-24 02:26:58.171096 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-24 02:26:58.171114 | orchestrator | Tuesday 24 March 2026 02:26:11 +0000 (0:00:00.309) 0:04:49.700 ********* 2026-03-24 02:26:58.171131 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:58.171146 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:58.171161 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:58.171175 | orchestrator | 2026-03-24 02:26:58.171192 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-24 02:26:58.171206 | orchestrator | Tuesday 24 March 2026 02:26:11 +0000 (0:00:00.328) 0:04:50.028 ********* 2026-03-24 02:26:58.171221 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:58.171235 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:58.171250 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:58.171265 | orchestrator | 2026-03-24 02:26:58.171280 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-24 02:26:58.171295 | orchestrator | Tuesday 24 March 2026 02:26:12 +0000 (0:00:00.697) 0:04:50.726 ********* 2026-03-24 02:26:58.171312 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:26:58.171329 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:26:58.171366 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:26:58.171383 | orchestrator | 2026-03-24 02:26:58.171400 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-24 02:26:58.171416 | orchestrator | Tuesday 24 
March 2026 02:26:12 +0000 (0:00:00.549) 0:04:51.275 ********* 2026-03-24 02:26:58.171431 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:26:58.171446 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:26:58.171460 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:26:58.171476 | orchestrator | 2026-03-24 02:26:58.171492 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-24 02:26:58.171511 | orchestrator | Tuesday 24 March 2026 02:26:13 +0000 (0:00:00.681) 0:04:51.956 ********* 2026-03-24 02:26:58.171556 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:26:58.171571 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:26:58.171585 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:26:58.171630 | orchestrator | 2026-03-24 02:26:58.171646 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-24 02:26:58.171660 | orchestrator | Tuesday 24 March 2026 02:26:14 +0000 (0:00:00.711) 0:04:52.668 ********* 2026-03-24 02:26:58.171673 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:26:58.171686 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:26:58.171700 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:26:58.171714 | orchestrator | 2026-03-24 02:26:58.171728 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-24 02:26:58.171743 | orchestrator | Tuesday 24 March 2026 02:26:15 +0000 (0:00:00.955) 0:04:53.624 ********* 2026-03-24 02:26:58.171757 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:26:58.171772 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:26:58.171785 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:26:58.171800 | orchestrator | 2026-03-24 02:26:58.171815 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-24 02:26:58.171893 | orchestrator | Tuesday 24 March 2026 02:26:16 +0000 (0:00:00.880) 0:04:54.504 ********* 2026-03-24 
02:26:58.171909 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:26:58.171923 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:26:58.171936 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:26:58.171951 | orchestrator |
2026-03-24 02:26:58.171965 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-24 02:26:58.171981 | orchestrator | Tuesday 24 March 2026 02:26:17 +0000 (0:00:00.958) 0:04:55.463 *********
2026-03-24 02:26:58.171995 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:26:58.172010 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:26:58.172025 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:26:58.172039 | orchestrator |
2026-03-24 02:26:58.172054 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-24 02:26:58.172076 | orchestrator | Tuesday 24 March 2026 02:26:26 +0000 (0:00:09.743) 0:05:05.207 *********
2026-03-24 02:26:58.172092 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:26:58.172106 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:26:58.172120 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:26:58.172133 | orchestrator |
2026-03-24 02:26:58.172147 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-24 02:26:58.172159 | orchestrator | Tuesday 24 March 2026 02:26:27 +0000 (0:00:01.095) 0:05:06.303 *********
2026-03-24 02:26:58.172173 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:26:58.172187 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:26:58.172202 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:26:58.172216 | orchestrator |
2026-03-24 02:26:58.172231 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-24 02:26:58.172246 | orchestrator | Tuesday 24 March 2026 02:26:43 +0000 (0:00:15.302) 0:05:21.605 *********
2026-03-24 02:26:58.172260 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:26:58.172271 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:26:58.172279 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:26:58.172288 | orchestrator |
2026-03-24 02:26:58.172297 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-24 02:26:58.172305 | orchestrator | Tuesday 24 March 2026 02:26:43 +0000 (0:00:00.700) 0:05:22.305 *********
2026-03-24 02:26:58.172314 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:26:58.172323 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:26:58.172331 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:26:58.172340 | orchestrator |
2026-03-24 02:26:58.172348 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-24 02:26:58.172359 | orchestrator | Tuesday 24 March 2026 02:26:53 +0000 (0:00:09.307) 0:05:31.612 *********
2026-03-24 02:26:58.172373 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:26:58.172398 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:26:58.172432 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:26:58.172448 | orchestrator |
2026-03-24 02:26:58.172462 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-24 02:26:58.172478 | orchestrator | Tuesday 24 March 2026 02:26:53 +0000 (0:00:00.639) 0:05:32.252 *********
2026-03-24 02:26:58.172492 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:26:58.172507 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:26:58.172520 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:26:58.172529 | orchestrator |
2026-03-24 02:26:58.172559 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-24 02:26:58.172569 | orchestrator | Tuesday 24 March 2026 02:26:54 +0000 (0:00:00.347) 0:05:32.600 *********
2026-03-24 02:26:58.172578 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:26:58.172586 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:26:58.172595 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:26:58.172632 | orchestrator |
2026-03-24 02:26:58.172641 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-24 02:26:58.172649 | orchestrator | Tuesday 24 March 2026 02:26:54 +0000 (0:00:00.330) 0:05:32.930 *********
2026-03-24 02:26:58.172658 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:26:58.172667 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:26:58.172676 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:26:58.172684 | orchestrator |
2026-03-24 02:26:58.172693 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-24 02:26:58.172702 | orchestrator | Tuesday 24 March 2026 02:26:54 +0000 (0:00:00.321) 0:05:33.252 *********
2026-03-24 02:26:58.172710 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:26:58.172719 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:26:58.172728 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:26:58.172736 | orchestrator |
2026-03-24 02:26:58.172754 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-24 02:26:58.172763 | orchestrator | Tuesday 24 March 2026 02:26:55 +0000 (0:00:00.597) 0:05:33.849 *********
2026-03-24 02:26:58.172772 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:26:58.172780 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:26:58.172789 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:26:58.172797 | orchestrator |
2026-03-24 02:26:58.172806 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-24 02:26:58.172815 | orchestrator | Tuesday 24 March 2026 02:26:55 +0000 (0:00:00.338) 0:05:34.187 *********
2026-03-24 02:26:58.172823 | orchestrator | ok: [testbed-node-0]
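The "Wait for haproxy to listen on VIP" handlers above poll until the restarted load balancer actually accepts TCP connections on the virtual IP before the play proceeds. A minimal Python sketch of that kind of readiness probe (host, port, and timings are illustrative, not taken from the role itself):

```python
import socket
import time


def wait_for_listen(host: str, port: int, timeout: float = 30.0,
                    interval: float = 1.0) -> bool:
    """Poll until a TCP connect to host:port succeeds, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means something is accepting on the VIP.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Connection refused / unreachable: back off and retry.
            time.sleep(interval)
    return False
```

Ansible's `wait_for` module provides the same behavior declaratively; the sketch only shows what the handler is waiting for.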
2026-03-24 02:26:58.172832 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:26:58.172840 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:26:58.172849 | orchestrator |
2026-03-24 02:26:58.172857 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-24 02:26:58.172866 | orchestrator | Tuesday 24 March 2026 02:26:56 +0000 (0:00:00.872) 0:05:35.060 *********
2026-03-24 02:26:58.172875 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:26:58.172883 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:26:58.172892 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:26:58.172900 | orchestrator |
2026-03-24 02:26:58.172909 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:26:58.172919 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-24 02:26:58.172929 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-24 02:26:58.172938 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-24 02:26:58.172946 | orchestrator |
2026-03-24 02:26:58.172955 | orchestrator |
2026-03-24 02:26:58.172964 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:26:58.172980 | orchestrator | Tuesday 24 March 2026 02:26:57 +0000 (0:00:00.803) 0:05:35.864 *********
2026-03-24 02:26:58.172989 | orchestrator | ===============================================================================
2026-03-24 02:26:58.172997 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.30s
2026-03-24 02:26:58.173006 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.74s
2026-03-24 02:26:58.173015 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.31s
2026-03-24 02:26:58.173023 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.08s
2026-03-24 02:26:58.173031 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.18s
2026-03-24 02:26:58.173040 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.15s
2026-03-24 02:26:58.173049 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.15s
2026-03-24 02:26:58.173057 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.98s
2026-03-24 02:26:58.173066 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.94s
2026-03-24 02:26:58.173074 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.74s
2026-03-24 02:26:58.173083 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.68s
2026-03-24 02:26:58.173091 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.36s
2026-03-24 02:26:58.173102 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.32s
2026-03-24 02:26:58.173116 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.26s
2026-03-24 02:26:58.173130 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.20s
2026-03-24 02:26:58.173143 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.20s
2026-03-24 02:26:58.173157 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.19s
2026-03-24 02:26:58.173171 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.19s
2026-03-24 02:26:58.173185 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.16s
2026-03-24 02:26:58.173199 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.15s
2026-03-24 02:27:00.350727 | orchestrator | 2026-03-24 02:27:00 | INFO  | Task 73ed9ca0-13f9-4841-9dcf-2505b3e5e053 (opensearch) was prepared for execution.
2026-03-24 02:27:00.350798 | orchestrator | 2026-03-24 02:27:00 | INFO  | It takes a moment until task 73ed9ca0-13f9-4841-9dcf-2505b3e5e053 (opensearch) has been started and output is visible here.
2026-03-24 02:27:09.366688 | orchestrator |
2026-03-24 02:27:09.366835 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 02:27:09.366861 | orchestrator |
2026-03-24 02:27:09.366880 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 02:27:09.366898 | orchestrator | Tuesday 24 March 2026 02:27:03 +0000 (0:00:00.180) 0:00:00.180 *********
2026-03-24 02:27:09.366915 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:27:09.366932 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:27:09.366948 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:27:09.366965 | orchestrator |
2026-03-24 02:27:09.366982 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 02:27:09.366999 | orchestrator | Tuesday 24 March 2026 02:27:04 +0000 (0:00:00.214) 0:00:00.394 *********
2026-03-24 02:27:09.367018 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-24 02:27:09.367058 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-24 02:27:09.367078 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-24 02:27:09.367097 | orchestrator |
2026-03-24 02:27:09.367115 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-24 02:27:09.367133 | orchestrator |
2026-03-24 02:27:09.367151 | orchestrator |
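The TASKS RECAP above is a fixed-format table: task name, a run of padding dashes, and the duration in seconds. A small sketch of turning such lines back into `(name, seconds)` pairs, assuming the dash-padded format shown in the log:

```python
import re

# Matches e.g. "loadbalancer : Start backup proxysql container ---- 15.30s".
# Single hyphens inside task names do not match the 2+ dash padding run.
_RECAP_RE = re.compile(r"^(?P<name>.+?)\s*-{2,}\s*(?P<secs>[0-9.]+)s\s*$")


def parse_recap_line(line: str):
    """Return (task_name, duration_seconds) for a TASKS RECAP row, else None."""
    m = _RECAP_RE.match(line.strip())
    if m is None:
        return None
    return m.group("name"), float(m.group("secs"))
```

Sorting the parsed pairs by duration reproduces the recap's slowest-first ordering, which is useful when comparing timings across job runs.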
TASK [opensearch : include_tasks] ********************************************** 2026-03-24 02:27:09.367201 | orchestrator | Tuesday 24 March 2026 02:27:04 +0000 (0:00:00.275) 0:00:00.670 ********* 2026-03-24 02:27:09.367223 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:27:09.367243 | orchestrator | 2026-03-24 02:27:09.367262 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-24 02:27:09.367275 | orchestrator | Tuesday 24 March 2026 02:27:04 +0000 (0:00:00.355) 0:00:01.026 ********* 2026-03-24 02:27:09.367288 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-24 02:27:09.367301 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-24 02:27:09.367314 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-24 02:27:09.367327 | orchestrator | 2026-03-24 02:27:09.367341 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-24 02:27:09.367354 | orchestrator | Tuesday 24 March 2026 02:27:05 +0000 (0:00:00.616) 0:00:01.642 ********* 2026-03-24 02:27:09.367370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:09.367389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:09.367426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:09.367460 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:27:09.367496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:27:09.367520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:27:09.367540 | orchestrator | 2026-03-24 02:27:09.367559 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-24 02:27:09.367578 | orchestrator | Tuesday 24 March 2026 02:27:06 +0000 (0:00:01.362) 0:00:03.004 ********* 2026-03-24 02:27:09.367597 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:27:09.367699 | orchestrator | 2026-03-24 02:27:09.367722 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-24 02:27:09.367740 | orchestrator | Tuesday 24 March 2026 02:27:07 +0000 (0:00:00.359) 0:00:03.364 
********* 2026-03-24 02:27:09.367777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:10.022095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:10.022199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:10.022218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:27:10.022233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:27:10.022292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-24 02:27:10.022308 | orchestrator | 2026-03-24 02:27:10.022322 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-24 02:27:10.022334 | orchestrator | Tuesday 24 March 2026 02:27:09 +0000 (0:00:02.256) 0:00:05.620 ********* 2026-03-24 02:27:10.022347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-24 02:27:10.022359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-24 02:27:10.022371 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:27:10.022383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-24 02:27:10.022446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-24 02:27:10.899251 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:27:10.899356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-24 02:27:10.899376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-24 02:27:10.899389 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:27:10.899401 | orchestrator | 2026-03-24 02:27:10.899414 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-24 02:27:10.899427 | orchestrator | Tuesday 24 March 2026 02:27:10 +0000 (0:00:00.655) 0:00:06.276 ********* 2026-03-24 02:27:10.899440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-24 02:27:10.899491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-24 02:27:10.899521 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:27:10.899534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-24 02:27:10.899546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-24 02:27:10.899558 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:27:10.899570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-24 02:27:10.899594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-24 02:27:10.899608 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:27:10.899704 | orchestrator | 2026-03-24 02:27:10.899747 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-24 02:27:10.899771 | orchestrator | Tuesday 24 March 2026 02:27:10 +0000 (0:00:00.868) 0:00:07.144 ********* 2026-03-24 02:27:18.725485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:18.725618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:18.725682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:18.725745 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:27:18.725781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:27:18.725794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:27:18.725806 | orchestrator | 2026-03-24 02:27:18.725818 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-24 02:27:18.725839 | orchestrator | Tuesday 24 March 2026 02:27:13 +0000 (0:00:02.328) 0:00:09.473 ********* 2026-03-24 02:27:18.725850 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:27:18.725861 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:27:18.725871 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:27:18.725881 | orchestrator | 2026-03-24 02:27:18.725892 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-24 02:27:18.725901 | orchestrator | Tuesday 24 
March 2026 02:27:15 +0000 (0:00:02.180) 0:00:11.654 ********* 2026-03-24 02:27:18.725911 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:27:18.725921 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:27:18.725931 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:27:18.725941 | orchestrator | 2026-03-24 02:27:18.725950 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-24 02:27:18.725960 | orchestrator | Tuesday 24 March 2026 02:27:17 +0000 (0:00:01.696) 0:00:13.351 ********* 2026-03-24 02:27:18.725971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:18.725987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:27:18.726008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-24 02:30:08.528731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:30:08.528974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-24 02:30:08.529008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-24 02:30:08.529018 | orchestrator |
2026-03-24 02:30:08.529028 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-24 02:30:08.529037 | orchestrator | Tuesday 24 March 2026 02:27:18 +0000 (0:00:01.622) 0:00:14.973 *********
2026-03-24 02:30:08.529045 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:30:08.529053 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:30:08.529060 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:30:08.529067 | orchestrator |
2026-03-24 02:30:08.529075 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-24 02:30:08.529083 | orchestrator | Tuesday 24 March 2026 02:27:18 +0000 (0:00:00.265) 0:00:15.239 *********
2026-03-24 02:30:08.529090 | orchestrator |
2026-03-24 02:30:08.529097 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-24 02:30:08.529105 | orchestrator | Tuesday 24 March 2026 02:27:19 +0000 (0:00:00.057) 0:00:15.296 *********
2026-03-24 02:30:08.529112 | orchestrator |
2026-03-24 02:30:08.529119 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-03-24 02:30:08.529126 | orchestrator | Tuesday 24 March 2026 02:27:19 +0000 (0:00:00.062) 0:00:15.359 *********
2026-03-24 02:30:08.529141 | orchestrator |
2026-03-24 02:30:08.529148 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-03-24 02:30:08.529170 | orchestrator | Tuesday 24 March 2026 02:27:19 +0000 (0:00:00.061) 0:00:15.420 *********
2026-03-24 02:30:08.529178 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:30:08.529185 | orchestrator |
2026-03-24 02:30:08.529193 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-03-24 02:30:08.529200 | orchestrator | Tuesday 24 March 2026 02:27:19 +0000 (0:00:00.196) 0:00:15.617 *********
2026-03-24 02:30:08.529207 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:30:08.529214 | orchestrator |
2026-03-24 02:30:08.529221 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-03-24 02:30:08.529228 | orchestrator | Tuesday 24 March 2026 02:27:19 +0000 (0:00:00.533) 0:00:16.151 *********
2026-03-24 02:30:08.529235 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:30:08.529243 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:30:08.529250 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:30:08.529257 | orchestrator |
2026-03-24 02:30:08.529264 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-03-24 02:30:08.529271 | orchestrator | Tuesday 24 March 2026 02:28:29 +0000 (0:01:09.876) 0:01:26.028 *********
2026-03-24 02:30:08.529278 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:30:08.529285 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:30:08.529292 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:30:08.529299 | orchestrator |
2026-03-24 02:30:08.529306 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-24 02:30:08.529313 | orchestrator | Tuesday 24 March 2026 02:29:57 +0000 (0:01:27.520) 0:02:53.548 *********
2026-03-24 02:30:08.529321 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:30:08.529328 | orchestrator |
2026-03-24 02:30:08.529336 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-03-24 02:30:08.529343 | orchestrator | Tuesday 24 March 2026 02:29:57 +0000 (0:00:00.488) 0:02:54.036 *********
2026-03-24 02:30:08.529350 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:30:08.529357 | orchestrator |
2026-03-24 02:30:08.529364 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-03-24 02:30:08.529371 | orchestrator | Tuesday 24 March 2026 02:30:00 +0000 (0:00:02.841) 0:02:56.878 *********
2026-03-24 02:30:08.529378 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:30:08.529385 | orchestrator |
2026-03-24 02:30:08.529393 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-24 02:30:08.529400 | orchestrator | Tuesday 24 March 2026 02:30:02 +0000 (0:00:03.161) 0:02:59.090 *********
2026-03-24 02:30:08.529407 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:30:08.529414 | orchestrator |
2026-03-24 02:30:08.529421 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-24 02:30:08.529428 | orchestrator | Tuesday 24 March 2026 02:30:05 +0000 (0:00:03.161) 0:03:02.251 *********
2026-03-24 02:30:08.529435 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:30:08.529442 | orchestrator |
2026-03-24 02:30:08.529450 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:30:08.529458 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 02:30:08.529466 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-24 02:30:08.529473 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-24 02:30:08.529481 | orchestrator |
2026-03-24 02:30:08.529488 | orchestrator |
2026-03-24 02:30:08.529495 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:30:08.529506 | orchestrator | Tuesday 24 March 2026 02:30:08 +0000 (0:00:02.510) 0:03:04.761 *********
2026-03-24 02:30:08.529519 | orchestrator | ===============================================================================
2026-03-24 02:30:08.529526 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 87.52s
2026-03-24 02:30:08.529533 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.88s
2026-03-24 02:30:08.529540 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.16s
2026-03-24 02:30:08.529547 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.84s
2026-03-24 02:30:08.529554 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.51s
2026-03-24 02:30:08.529561 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.33s
2026-03-24 02:30:08.529568 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.26s
2026-03-24 02:30:08.529575 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.21s
2026-03-24 02:30:08.529582 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.18s
2026-03-24 02:30:08.529589 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.70s
2026-03-24 02:30:08.529596 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.62s
2026-03-24 02:30:08.529603 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.36s
2026-03-24 02:30:08.529610 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.87s
2026-03-24 02:30:08.529617 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.66s
2026-03-24 02:30:08.529625 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.62s
2026-03-24 02:30:08.529632 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.53s
2026-03-24 02:30:08.529643 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s
2026-03-24 02:30:08.824499 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.36s
2026-03-24 02:30:08.824575 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.36s
2026-03-24 02:30:08.824583 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.28s
2026-03-24 02:30:11.012736 | orchestrator | 2026-03-24 02:30:11 | INFO  | Task f926ea1d-38ae-452c-8666-5a59bd05a586 (memcached) was prepared for execution.
2026-03-24 02:30:11.012826 | orchestrator | 2026-03-24 02:30:11 | INFO  | It takes a moment until task f926ea1d-38ae-452c-8666-5a59bd05a586 (memcached) has been started and output is visible here.
2026-03-24 02:30:22.630979 | orchestrator |
2026-03-24 02:30:22.631111 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 02:30:22.631130 | orchestrator |
2026-03-24 02:30:22.631143 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 02:30:22.631154 | orchestrator | Tuesday 24 March 2026 02:30:15 +0000 (0:00:00.240) 0:00:00.240 *********
2026-03-24 02:30:22.631166 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:30:22.631179 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:30:22.631189 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:30:22.631200 | orchestrator |
2026-03-24 02:30:22.631211 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 02:30:22.631222 | orchestrator | Tuesday 24 March 2026 02:30:15 +0000 (0:00:00.289) 0:00:00.530 *********
2026-03-24 02:30:22.631233 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-24 02:30:22.631244 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-24 02:30:22.631255 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-24 02:30:22.631265 | orchestrator |
2026-03-24 02:30:22.631276 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-24 02:30:22.631287 | orchestrator |
2026-03-24 02:30:22.631298 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-24 02:30:22.631336 | orchestrator | Tuesday 24 March 2026 02:30:15 +0000 (0:00:00.387) 0:00:00.917 *********
2026-03-24 02:30:22.631347 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:30:22.631359 | orchestrator |
2026-03-24 02:30:22.631370 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-24 02:30:22.631380 | orchestrator | Tuesday 24 March 2026 02:30:16 +0000 (0:00:00.691) 0:00:01.379 *********
2026-03-24 02:30:22.631391 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-24 02:30:22.631402 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-24 02:30:22.631413 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-24 02:30:22.631423 | orchestrator |
2026-03-24 02:30:22.631434 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-24 02:30:22.631445 | orchestrator | Tuesday 24 March 2026 02:30:16 +0000 (0:00:01.653) 0:00:02.070 *********
2026-03-24 02:30:22.631458 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-24 02:30:22.631470 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-24 02:30:22.631482 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-24 02:30:22.631495 | orchestrator |
2026-03-24 02:30:22.631507 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-24 02:30:22.631519 | orchestrator | Tuesday 24 March 2026 02:30:18 +0000 (0:00:01.653) 0:00:03.724 *********
2026-03-24 02:30:22.631531 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:30:22.631543 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:30:22.631556 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:30:22.631568 | orchestrator |
2026-03-24 02:30:22.631595 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-24 02:30:22.631607 | orchestrator | Tuesday 24 March 2026 02:30:20 +0000 (0:00:01.474) 0:00:05.199 *********
2026-03-24 02:30:22.631617 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:30:22.631628 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:30:22.631638 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:30:22.631649 | orchestrator |
2026-03-24 02:30:22.631660 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:30:22.631671 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:30:22.631683 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:30:22.631693 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 02:30:22.631704 | orchestrator |
2026-03-24 02:30:22.631714 | orchestrator |
2026-03-24 02:30:22.631725 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:30:22.631736 | orchestrator | Tuesday 24 March 2026 02:30:22 +0000 (0:00:02.108) 0:00:07.308 *********
2026-03-24 02:30:22.631746 | orchestrator | ===============================================================================
2026-03-24 02:30:22.631757 | orchestrator | memcached : Restart memcached container --------------------------------- 2.11s
2026-03-24 02:30:22.631767 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.65s
2026-03-24 02:30:22.631778 | orchestrator | memcached : Check memcached container ----------------------------------- 1.47s
2026-03-24 02:30:22.631789 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.69s
2026-03-24 02:30:22.631799 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.46s
2026-03-24 02:30:22.631810 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s
2026-03-24 02:30:22.631821 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-03-24 02:30:24.887660 | orchestrator | 2026-03-24 02:30:24 | INFO  | Task 3c681624-6b30-4b12-91fc-6c62760e1810 (redis) was prepared for execution.
2026-03-24 02:30:24.887787 | orchestrator | 2026-03-24 02:30:24 | INFO  | It takes a moment until task 3c681624-6b30-4b12-91fc-6c62760e1810 (redis) has been started and output is visible here.
2026-03-24 02:30:32.640763 | orchestrator |
2026-03-24 02:30:32.641006 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 02:30:32.641044 | orchestrator |
2026-03-24 02:30:32.641064 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 02:30:32.641084 | orchestrator | Tuesday 24 March 2026 02:30:28 +0000 (0:00:00.183) 0:00:00.183 *********
2026-03-24 02:30:32.641102 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:30:32.641120 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:30:32.641137 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:30:32.641155 | orchestrator |
2026-03-24 02:30:32.641172 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 02:30:32.641189 | orchestrator | Tuesday 24 March 2026 02:30:28 +0000 (0:00:00.221) 0:00:00.404 *********
2026-03-24 02:30:32.641207 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-24 02:30:32.641226 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-24 02:30:32.641243 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-24 02:30:32.641262 | orchestrator |
2026-03-24 02:30:32.641282 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-24 02:30:32.641301 | orchestrator |
2026-03-24 02:30:32.641320 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-24 02:30:32.641337 | orchestrator | Tuesday 24 March 2026 02:30:28 +0000 (0:00:00.294) 0:00:00.699 *********
2026-03-24 02:30:32.641357 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1,
testbed-node-2 2026-03-24 02:30:32.641371 | orchestrator | 2026-03-24 02:30:32.641382 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-24 02:30:32.641393 | orchestrator | Tuesday 24 March 2026 02:30:29 +0000 (0:00:00.343) 0:00:01.042 ********* 2026-03-24 02:30:32.641413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641594 | orchestrator | 2026-03-24 02:30:32.641607 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-24 02:30:32.641626 | orchestrator | Tuesday 24 March 2026 02:30:30 +0000 (0:00:01.113) 0:00:02.156 ********* 2026-03-24 02:30:32.641648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:32.641860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712170 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712271 | orchestrator | 2026-03-24 02:30:36.712289 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-24 02:30:36.712302 | orchestrator | Tuesday 24 March 2026 02:30:32 +0000 (0:00:02.203) 0:00:04.359 ********* 2026-03-24 02:30:36.712316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712446 | orchestrator | 2026-03-24 02:30:36.712459 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-24 02:30:36.712470 | orchestrator | Tuesday 24 March 2026 02:30:35 +0000 (0:00:02.421) 0:00:06.781 ********* 2026-03-24 02:30:36.712481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:36.712600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 02:30:42.696506 | orchestrator | 2026-03-24 02:30:42.696585 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-24 02:30:42.696593 | orchestrator | Tuesday 24 March 2026 02:30:36 +0000 (0:00:01.459) 0:00:08.240 ********* 2026-03-24 02:30:42.696597 | orchestrator | 2026-03-24 02:30:42.696601 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-24 02:30:42.696606 | orchestrator | Tuesday 24 March 2026 02:30:36 +0000 (0:00:00.061) 0:00:08.302 ********* 2026-03-24 02:30:42.696610 | orchestrator | 2026-03-24 02:30:42.696614 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-03-24 02:30:42.696618 | orchestrator | Tuesday 24 March 2026 02:30:36 +0000 (0:00:00.061) 0:00:08.363 ********* 2026-03-24 02:30:42.696622 | orchestrator | 2026-03-24 02:30:42.696626 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-24 02:30:42.696629 | orchestrator | Tuesday 24 March 2026 02:30:36 +0000 (0:00:00.064) 0:00:08.428 ********* 2026-03-24 02:30:42.696633 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:30:42.696638 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:30:42.696642 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:30:42.696645 | orchestrator | 2026-03-24 02:30:42.696649 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-24 02:30:42.696653 | orchestrator | Tuesday 24 March 2026 02:30:39 +0000 (0:00:02.719) 0:00:11.147 ********* 2026-03-24 02:30:42.696657 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:30:42.696660 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:30:42.696685 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:30:42.696689 | orchestrator | 2026-03-24 02:30:42.696693 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:30:42.696697 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:30:42.696703 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:30:42.696707 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:30:42.696710 | orchestrator | 2026-03-24 02:30:42.696714 | orchestrator | 2026-03-24 02:30:42.696731 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:30:42.696737 | orchestrator | Tuesday 24 March 
2026 02:30:42 +0000 (0:00:02.962) 0:00:14.110 ********* 2026-03-24 02:30:42.696743 | orchestrator | =============================================================================== 2026-03-24 02:30:42.696748 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 2.96s 2026-03-24 02:30:42.696754 | orchestrator | redis : Restart redis container ----------------------------------------- 2.72s 2026-03-24 02:30:42.696759 | orchestrator | redis : Copying over redis config files --------------------------------- 2.42s 2026-03-24 02:30:42.696766 | orchestrator | redis : Copying over default config.json files -------------------------- 2.20s 2026-03-24 02:30:42.696772 | orchestrator | redis : Check redis containers ------------------------------------------ 1.46s 2026-03-24 02:30:42.696778 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.11s 2026-03-24 02:30:42.696784 | orchestrator | redis : include_tasks --------------------------------------------------- 0.34s 2026-03-24 02:30:42.696789 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s 2026-03-24 02:30:42.696796 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.22s 2026-03-24 02:30:42.696801 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.19s 2026-03-24 02:30:44.909784 | orchestrator | 2026-03-24 02:30:44 | INFO  | Task 87f397ff-7503-4deb-b463-4cb464fe3057 (mariadb) was prepared for execution. 2026-03-24 02:30:44.909865 | orchestrator | 2026-03-24 02:30:44 | INFO  | It takes a moment until task 87f397ff-7503-4deb-b463-4cb464fe3057 (mariadb) has been started and output is visible here. 
2026-03-24 02:30:57.331169 | orchestrator | 2026-03-24 02:30:57.331305 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 02:30:57.331333 | orchestrator | 2026-03-24 02:30:57.331352 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 02:30:57.331369 | orchestrator | Tuesday 24 March 2026 02:30:48 +0000 (0:00:00.154) 0:00:00.154 ********* 2026-03-24 02:30:57.331386 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:30:57.331404 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:30:57.331423 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:30:57.331441 | orchestrator | 2026-03-24 02:30:57.331459 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 02:30:57.331477 | orchestrator | Tuesday 24 March 2026 02:30:49 +0000 (0:00:00.291) 0:00:00.445 ********* 2026-03-24 02:30:57.331496 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-24 02:30:57.331516 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-24 02:30:57.331533 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-24 02:30:57.331552 | orchestrator | 2026-03-24 02:30:57.331570 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-24 02:30:57.331588 | orchestrator | 2026-03-24 02:30:57.331606 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-24 02:30:57.331625 | orchestrator | Tuesday 24 March 2026 02:30:49 +0000 (0:00:00.438) 0:00:00.884 ********* 2026-03-24 02:30:57.331681 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 02:30:57.331703 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-24 02:30:57.331721 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-24 02:30:57.331739 | orchestrator | 
2026-03-24 02:30:57.331750 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-24 02:30:57.331764 | orchestrator | Tuesday 24 March 2026 02:30:49 +0000 (0:00:00.326) 0:00:01.210 ********* 2026-03-24 02:30:57.331784 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:30:57.331802 | orchestrator | 2026-03-24 02:30:57.331820 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-24 02:30:57.331840 | orchestrator | Tuesday 24 March 2026 02:30:50 +0000 (0:00:00.432) 0:00:01.642 ********* 2026-03-24 02:30:57.331885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 02:30:57.331964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 02:30:57.331998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 02:30:57.332011 | orchestrator | 2026-03-24 02:30:57.332023 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-24 02:30:57.332035 | orchestrator | Tuesday 24 March 2026 02:30:52 +0000 (0:00:02.186) 0:00:03.829 ********* 2026-03-24 02:30:57.332048 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:30:57.332061 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:30:57.332073 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:30:57.332086 | orchestrator | 2026-03-24 02:30:57.332098 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-24 02:30:57.332110 | orchestrator | Tuesday 24 March 2026 02:30:53 +0000 (0:00:00.585) 0:00:04.414 ********* 2026-03-24 02:30:57.332123 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:30:57.332135 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:30:57.332148 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:30:57.332159 | orchestrator | 2026-03-24 02:30:57.332171 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-24 02:30:57.332184 | orchestrator | Tuesday 24 March 2026 02:30:54 +0000 (0:00:01.357) 0:00:05.772 ********* 2026-03-24 02:30:57.332208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 02:31:04.181374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 02:31:04.181496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 02:31:04.181532 | orchestrator | 2026-03-24 02:31:04.181543 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-24 02:31:04.181552 | orchestrator | Tuesday 24 March 2026 02:30:57 +0000 (0:00:02.894) 0:00:08.666 ********* 2026-03-24 02:31:04.181561 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:31:04.181569 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:31:04.181577 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:31:04.181585 | orchestrator | 2026-03-24 02:31:04.181594 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-24 02:31:04.181616 | orchestrator | Tuesday 24 March 2026 02:30:58 +0000 (0:00:01.062) 0:00:09.729 ********* 2026-03-24 02:31:04.181625 | 
orchestrator | changed: [testbed-node-0] 2026-03-24 02:31:04.181633 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:31:04.181641 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:31:04.181648 | orchestrator | 2026-03-24 02:31:04.181657 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-24 02:31:04.181665 | orchestrator | Tuesday 24 March 2026 02:31:01 +0000 (0:00:03.475) 0:00:13.205 ********* 2026-03-24 02:31:04.181674 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:31:04.181682 | orchestrator | 2026-03-24 02:31:04.181690 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-24 02:31:04.181698 | orchestrator | Tuesday 24 March 2026 02:31:02 +0000 (0:00:00.433) 0:00:13.638 ********* 2026-03-24 02:31:04.181713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:31:04.181728 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:31:04.181743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:31:08.311299 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:31:08.311452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:31:08.311516 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:31:08.311538 | orchestrator | 2026-03-24 02:31:08.311557 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-24 02:31:08.311577 | orchestrator | Tuesday 24 March 2026 02:31:04 +0000 (0:00:01.878) 0:00:15.516 ********* 2026-03-24 02:31:08.311597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:31:08.311615 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:31:08.311670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:31:08.311706 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:31:08.311725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:31:08.311744 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:31:08.311764 | orchestrator | 2026-03-24 02:31:08.311787 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-24 02:31:08.311808 | orchestrator | Tuesday 24 March 2026 02:31:06 +0000 (0:00:02.119) 0:00:17.635 ********* 2026-03-24 02:31:08.311857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:31:10.891521 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:31:10.891665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:31:10.891698 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:31:10.891740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 02:31:10.891760 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:31:10.891778 | orchestrator | 2026-03-24 02:31:10.891798 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-24 02:31:10.891853 | orchestrator | Tuesday 24 March 2026 02:31:08 +0000 (0:00:02.018) 0:00:19.654 ********* 2026-03-24 02:31:10.891899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 02:31:10.891922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 02:31:10.892001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 02:33:18.918274 | orchestrator | 2026-03-24 02:33:18.918372 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-24 02:33:18.918385 | orchestrator | Tuesday 24 March 2026 02:31:10 +0000 (0:00:02.576) 0:00:22.231 ********* 2026-03-24 02:33:18.918394 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:33:18.918403 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:33:18.918411 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:33:18.918419 | orchestrator | 2026-03-24 02:33:18.918428 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-24 02:33:18.918436 | orchestrator | Tuesday 24 March 2026 02:31:11 +0000 (0:00:00.808) 0:00:23.040 ********* 2026-03-24 02:33:18.918444 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:18.918453 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:33:18.918461 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:33:18.918469 | orchestrator | 2026-03-24 02:33:18.918477 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-03-24 02:33:18.918485 | orchestrator | Tuesday 24 March 2026 02:31:12 +0000 (0:00:00.531) 0:00:23.571 ********* 2026-03-24 02:33:18.918493 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:18.918500 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:33:18.918508 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:33:18.918516 | orchestrator | 2026-03-24 02:33:18.918524 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-24 02:33:18.918532 | orchestrator | Tuesday 24 March 2026 02:31:12 +0000 (0:00:00.346) 0:00:23.917 ********* 2026-03-24 02:33:18.918541 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-24 02:33:18.918550 | orchestrator | ...ignoring 2026-03-24 02:33:18.918558 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-24 02:33:18.918566 | orchestrator | ...ignoring 2026-03-24 02:33:18.918574 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-24 02:33:18.918582 | orchestrator | ...ignoring 2026-03-24 02:33:18.918590 | orchestrator | 2026-03-24 02:33:18.918598 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-24 02:33:18.918626 | orchestrator | Tuesday 24 March 2026 02:31:23 +0000 (0:00:10.851) 0:00:34.769 ********* 2026-03-24 02:33:18.918634 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:18.918642 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:33:18.918650 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:33:18.918658 | orchestrator | 2026-03-24 02:33:18.918666 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-24 02:33:18.918674 | orchestrator | Tuesday 24 March 2026 02:31:23 +0000 (0:00:00.346) 0:00:35.116 ********* 2026-03-24 02:33:18.918682 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:18.918689 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:18.918697 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:18.918705 | orchestrator | 2026-03-24 02:33:18.918713 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-24 02:33:18.918721 | orchestrator | Tuesday 24 March 2026 02:31:24 +0000 (0:00:00.520) 0:00:35.637 ********* 2026-03-24 02:33:18.918729 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:18.918737 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:18.918744 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:18.918752 | orchestrator | 2026-03-24 02:33:18.918760 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-24 02:33:18.918768 | orchestrator | Tuesday 24 March 2026 02:31:24 +0000 (0:00:00.414) 0:00:36.051 ********* 2026-03-24 02:33:18.918788 | orchestrator | skipping: 
[testbed-node-0] 2026-03-24 02:33:18.918797 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:18.918805 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:18.918813 | orchestrator | 2026-03-24 02:33:18.918821 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-24 02:33:18.918829 | orchestrator | Tuesday 24 March 2026 02:31:25 +0000 (0:00:00.393) 0:00:36.445 ********* 2026-03-24 02:33:18.918837 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:18.918844 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:33:18.918852 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:33:18.918860 | orchestrator | 2026-03-24 02:33:18.918868 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-24 02:33:18.918876 | orchestrator | Tuesday 24 March 2026 02:31:25 +0000 (0:00:00.397) 0:00:36.842 ********* 2026-03-24 02:33:18.918884 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:18.918892 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:18.918900 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:18.918908 | orchestrator | 2026-03-24 02:33:18.918916 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-24 02:33:18.918923 | orchestrator | Tuesday 24 March 2026 02:31:26 +0000 (0:00:00.564) 0:00:37.407 ********* 2026-03-24 02:33:18.918931 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:18.918939 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:18.918947 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-24 02:33:18.918955 | orchestrator | 2026-03-24 02:33:18.918963 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-24 02:33:18.918971 | orchestrator | Tuesday 24 March 2026 02:31:26 +0000 (0:00:00.350) 0:00:37.757 ********* 2026-03-24 
02:33:18.918979 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:33:18.918986 | orchestrator | 2026-03-24 02:33:18.918994 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-24 02:33:18.919002 | orchestrator | Tuesday 24 March 2026 02:31:36 +0000 (0:00:09.952) 0:00:47.710 ********* 2026-03-24 02:33:18.919010 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:18.919017 | orchestrator | 2026-03-24 02:33:18.919026 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-24 02:33:18.919033 | orchestrator | Tuesday 24 March 2026 02:31:36 +0000 (0:00:00.131) 0:00:47.842 ********* 2026-03-24 02:33:18.919042 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:18.919063 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:18.919072 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:18.919107 | orchestrator | 2026-03-24 02:33:18.919116 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-24 02:33:18.919124 | orchestrator | Tuesday 24 March 2026 02:31:37 +0000 (0:00:00.916) 0:00:48.759 ********* 2026-03-24 02:33:18.919132 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:33:18.919140 | orchestrator | 2026-03-24 02:33:18.919147 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-24 02:33:18.919155 | orchestrator | Tuesday 24 March 2026 02:31:44 +0000 (0:00:07.207) 0:00:55.966 ********* 2026-03-24 02:33:18.919163 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:18.919171 | orchestrator | 2026-03-24 02:33:18.919179 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-24 02:33:18.919187 | orchestrator | Tuesday 24 March 2026 02:31:46 +0000 (0:00:01.654) 0:00:57.621 ********* 2026-03-24 02:33:18.919194 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:18.919202 | 
orchestrator | 2026-03-24 02:33:18.919210 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-24 02:33:18.919218 | orchestrator | Tuesday 24 March 2026 02:31:48 +0000 (0:00:02.293) 0:00:59.915 ********* 2026-03-24 02:33:18.919226 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:33:18.919234 | orchestrator | 2026-03-24 02:33:18.919241 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-24 02:33:18.919249 | orchestrator | Tuesday 24 March 2026 02:31:48 +0000 (0:00:00.120) 0:01:00.035 ********* 2026-03-24 02:33:18.919257 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:18.919265 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:18.919273 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:18.919281 | orchestrator | 2026-03-24 02:33:18.919289 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-24 02:33:18.919297 | orchestrator | Tuesday 24 March 2026 02:31:48 +0000 (0:00:00.292) 0:01:00.328 ********* 2026-03-24 02:33:18.919304 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:18.919312 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-24 02:33:18.919320 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:33:18.919328 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:33:18.919335 | orchestrator | 2026-03-24 02:33:18.919343 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-24 02:33:18.919351 | orchestrator | skipping: no hosts matched 2026-03-24 02:33:18.919359 | orchestrator | 2026-03-24 02:33:18.919367 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-24 02:33:18.919375 | orchestrator | 2026-03-24 02:33:18.919383 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-24 02:33:18.919390 | orchestrator | Tuesday 24 March 2026 02:31:49 +0000 (0:00:00.481) 0:01:00.810 ********* 2026-03-24 02:33:18.919398 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:33:18.919406 | orchestrator | 2026-03-24 02:33:18.919414 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-24 02:33:18.919422 | orchestrator | Tuesday 24 March 2026 02:32:06 +0000 (0:00:17.040) 0:01:17.851 ********* 2026-03-24 02:33:18.919429 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:33:18.919437 | orchestrator | 2026-03-24 02:33:18.919445 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-24 02:33:18.919453 | orchestrator | Tuesday 24 March 2026 02:32:23 +0000 (0:00:16.575) 0:01:34.426 ********* 2026-03-24 02:33:18.919461 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:33:18.919468 | orchestrator | 2026-03-24 02:33:18.919476 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-24 02:33:18.919488 | orchestrator | 2026-03-24 02:33:18.919496 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-24 02:33:18.919508 | orchestrator | Tuesday 24 March 2026 02:32:25 +0000 (0:00:02.290) 0:01:36.716 ********* 2026-03-24 02:33:18.919517 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:33:18.919525 | orchestrator | 2026-03-24 02:33:18.919533 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-24 02:33:18.919546 | orchestrator | Tuesday 24 March 2026 02:32:41 +0000 (0:00:16.232) 0:01:52.949 ********* 2026-03-24 02:33:18.919553 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:33:18.919561 | orchestrator | 2026-03-24 02:33:18.919569 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-24 02:33:18.919577 
| orchestrator | Tuesday 24 March 2026 02:32:58 +0000 (0:00:16.582) 0:02:09.531 ********* 2026-03-24 02:33:18.919585 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:33:18.919592 | orchestrator | 2026-03-24 02:33:18.919600 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-24 02:33:18.919608 | orchestrator | 2026-03-24 02:33:18.919616 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-24 02:33:18.919624 | orchestrator | Tuesday 24 March 2026 02:33:00 +0000 (0:00:02.347) 0:02:11.879 ********* 2026-03-24 02:33:18.919632 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:33:18.919639 | orchestrator | 2026-03-24 02:33:18.919647 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-24 02:33:18.919655 | orchestrator | Tuesday 24 March 2026 02:33:10 +0000 (0:00:09.822) 0:02:21.702 ********* 2026-03-24 02:33:18.919663 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:18.919671 | orchestrator | 2026-03-24 02:33:18.919679 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-24 02:33:18.919686 | orchestrator | Tuesday 24 March 2026 02:33:15 +0000 (0:00:05.568) 0:02:27.270 ********* 2026-03-24 02:33:18.919694 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:18.919702 | orchestrator | 2026-03-24 02:33:18.919710 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-24 02:33:18.919718 | orchestrator | 2026-03-24 02:33:18.919725 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-24 02:33:18.919733 | orchestrator | Tuesday 24 March 2026 02:33:18 +0000 (0:00:02.357) 0:02:29.628 ********* 2026-03-24 02:33:18.919741 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:33:18.919749 | orchestrator | 
2026-03-24 02:33:18.919757 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-24 02:33:18.919770 | orchestrator | Tuesday 24 March 2026 02:33:18 +0000 (0:00:00.627) 0:02:30.255 ********* 2026-03-24 02:33:31.425523 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:31.425657 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:31.425682 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:33:31.425703 | orchestrator | 2026-03-24 02:33:31.425723 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-24 02:33:31.425743 | orchestrator | Tuesday 24 March 2026 02:33:21 +0000 (0:00:02.294) 0:02:32.550 ********* 2026-03-24 02:33:31.425762 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:31.425781 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:31.425799 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:33:31.425818 | orchestrator | 2026-03-24 02:33:31.425837 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-24 02:33:31.425855 | orchestrator | Tuesday 24 March 2026 02:33:23 +0000 (0:00:02.199) 0:02:34.750 ********* 2026-03-24 02:33:31.425873 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:31.425884 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:31.425895 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:33:31.425906 | orchestrator | 2026-03-24 02:33:31.425917 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-24 02:33:31.425928 | orchestrator | Tuesday 24 March 2026 02:33:25 +0000 (0:00:02.417) 0:02:37.167 ********* 2026-03-24 02:33:31.425939 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:31.425950 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:31.425961 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:33:31.425972 | orchestrator | 
2026-03-24 02:33:31.425982 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-24 02:33:31.426156 | orchestrator | Tuesday 24 March 2026 02:33:28 +0000 (0:00:02.249) 0:02:39.417 ********* 2026-03-24 02:33:31.426173 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:31.426187 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:33:31.426198 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:33:31.426210 | orchestrator | 2026-03-24 02:33:31.426223 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-24 02:33:31.426235 | orchestrator | Tuesday 24 March 2026 02:33:30 +0000 (0:00:02.695) 0:02:42.112 ********* 2026-03-24 02:33:31.426248 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:31.426260 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:33:31.426272 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:33:31.426284 | orchestrator | 2026-03-24 02:33:31.426295 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:33:31.426307 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-24 02:33:31.426320 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-24 02:33:31.426331 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-24 02:33:31.426342 | orchestrator | 2026-03-24 02:33:31.426352 | orchestrator | 2026-03-24 02:33:31.426363 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:33:31.426374 | orchestrator | Tuesday 24 March 2026 02:33:31 +0000 (0:00:00.376) 0:02:42.489 ********* 2026-03-24 02:33:31.426385 | orchestrator | =============================================================================== 2026-03-24 02:33:31.426396 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 33.27s 2026-03-24 02:33:31.426421 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.16s 2026-03-24 02:33:31.426433 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.85s 2026-03-24 02:33:31.426443 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.95s 2026-03-24 02:33:31.426454 | orchestrator | mariadb : Restart MariaDB container ------------------------------------- 9.82s 2026-03-24 02:33:31.426465 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.21s 2026-03-24 02:33:31.426476 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.57s 2026-03-24 02:33:31.426487 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.64s 2026-03-24 02:33:31.426498 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.48s 2026-03-24 02:33:31.426509 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.89s 2026-03-24 02:33:31.426520 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.70s 2026-03-24 02:33:31.426530 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.58s 2026-03-24 02:33:31.426541 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.42s 2026-03-24 02:33:31.426552 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.36s 2026-03-24 02:33:31.426562 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.29s 2026-03-24 02:33:31.426573 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.29s 2026-03-24 02:33:31.426584 | orchestrator | 
mariadb : Granting permissions on Mariabackup database to backup user --- 2.25s 2026-03-24 02:33:31.426595 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.20s 2026-03-24 02:33:31.426606 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.19s 2026-03-24 02:33:31.426616 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.12s 2026-03-24 02:33:33.573613 | orchestrator | 2026-03-24 02:33:33 | INFO  | Task d0a8c887-4832-4a81-bce6-58a72f34ddea (rabbitmq) was prepared for execution. 2026-03-24 02:33:33.573752 | orchestrator | 2026-03-24 02:33:33 | INFO  | It takes a moment until task d0a8c887-4832-4a81-bce6-58a72f34ddea (rabbitmq) has been started and output is visible here. 2026-03-24 02:33:45.536069 | orchestrator | 2026-03-24 02:33:45.536284 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 02:33:45.536302 | orchestrator | 2026-03-24 02:33:45.536314 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 02:33:45.536326 | orchestrator | Tuesday 24 March 2026 02:33:37 +0000 (0:00:00.121) 0:00:00.121 ********* 2026-03-24 02:33:45.536337 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:45.536349 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:33:45.536359 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:33:45.536370 | orchestrator | 2026-03-24 02:33:45.536381 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 02:33:45.536392 | orchestrator | Tuesday 24 March 2026 02:33:37 +0000 (0:00:00.238) 0:00:00.360 ********* 2026-03-24 02:33:45.536403 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-24 02:33:45.536414 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-24 02:33:45.536425 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-24 02:33:45.536436 | orchestrator | 2026-03-24 02:33:45.536447 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-24 02:33:45.536457 | orchestrator | 2026-03-24 02:33:45.536469 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-24 02:33:45.536480 | orchestrator | Tuesday 24 March 2026 02:33:38 +0000 (0:00:00.472) 0:00:00.832 ********* 2026-03-24 02:33:45.536492 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:33:45.536504 | orchestrator | 2026-03-24 02:33:45.536515 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-24 02:33:45.536526 | orchestrator | Tuesday 24 March 2026 02:33:38 +0000 (0:00:00.453) 0:00:01.285 ********* 2026-03-24 02:33:45.536563 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:45.536576 | orchestrator | 2026-03-24 02:33:45.536588 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-24 02:33:45.536601 | orchestrator | Tuesday 24 March 2026 02:33:39 +0000 (0:00:00.873) 0:00:02.158 ********* 2026-03-24 02:33:45.536613 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:45.536627 | orchestrator | 2026-03-24 02:33:45.536639 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-24 02:33:45.536652 | orchestrator | Tuesday 24 March 2026 02:33:39 +0000 (0:00:00.311) 0:00:02.469 ********* 2026-03-24 02:33:45.536664 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:45.536676 | orchestrator | 2026-03-24 02:33:45.536689 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-24 02:33:45.536701 | orchestrator | Tuesday 24 March 2026 02:33:40 +0000 (0:00:00.309) 0:00:02.779 ********* 
2026-03-24 02:33:45.536714 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:45.536726 | orchestrator | 2026-03-24 02:33:45.536738 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-24 02:33:45.536750 | orchestrator | Tuesday 24 March 2026 02:33:40 +0000 (0:00:00.313) 0:00:03.092 ********* 2026-03-24 02:33:45.536762 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:45.536774 | orchestrator | 2026-03-24 02:33:45.536787 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-24 02:33:45.536800 | orchestrator | Tuesday 24 March 2026 02:33:40 +0000 (0:00:00.443) 0:00:03.535 ********* 2026-03-24 02:33:45.536829 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:33:45.536843 | orchestrator | 2026-03-24 02:33:45.536855 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-24 02:33:45.536892 | orchestrator | Tuesday 24 March 2026 02:33:41 +0000 (0:00:00.733) 0:00:04.268 ********* 2026-03-24 02:33:45.536904 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:33:45.536915 | orchestrator | 2026-03-24 02:33:45.536925 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-24 02:33:45.536936 | orchestrator | Tuesday 24 March 2026 02:33:42 +0000 (0:00:00.794) 0:00:05.063 ********* 2026-03-24 02:33:45.536947 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:45.536964 | orchestrator | 2026-03-24 02:33:45.536982 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-24 02:33:45.537000 | orchestrator | Tuesday 24 March 2026 02:33:42 +0000 (0:00:00.400) 0:00:05.463 ********* 2026-03-24 02:33:45.537018 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:33:45.537036 | orchestrator | 2026-03-24 
02:33:45.537054 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-24 02:33:45.537070 | orchestrator | Tuesday 24 March 2026 02:33:43 +0000 (0:00:00.379) 0:00:05.843 ********* 2026-03-24 02:33:45.537143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 02:33:45.537173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 02:33:45.537195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 02:33:45.537230 | orchestrator | 2026-03-24 02:33:45.537248 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-24 02:33:45.537277 | orchestrator | Tuesday 24 March 2026 02:33:43 +0000 (0:00:00.857) 0:00:06.700 ********* 2026-03-24 02:33:45.537298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 02:33:45.537332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 02:34:03.581312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 02:34:03.581428 | orchestrator | 2026-03-24 02:34:03.581445 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-24 02:34:03.581458 | orchestrator | Tuesday 24 March 2026 02:33:45 +0000 (0:00:01.546) 0:00:08.247 ********* 2026-03-24 02:34:03.581468 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-24 02:34:03.581502 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-24 02:34:03.581513 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-24 02:34:03.581522 | orchestrator | 2026-03-24 02:34:03.581532 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-03-24 02:34:03.581542 | orchestrator | Tuesday 24 March 2026 02:33:46 +0000 (0:00:01.387) 0:00:09.634 ********* 2026-03-24 02:34:03.581551 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-24 02:34:03.581562 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-24 02:34:03.581584 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-24 02:34:03.581594 | orchestrator | 2026-03-24 02:34:03.581603 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-24 02:34:03.581613 | orchestrator | Tuesday 24 March 2026 02:33:48 +0000 (0:00:01.619) 0:00:11.254 ********* 2026-03-24 02:34:03.581623 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-24 02:34:03.581632 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-24 02:34:03.581642 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-24 02:34:03.581651 | orchestrator | 2026-03-24 02:34:03.581661 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-24 02:34:03.581670 | orchestrator | Tuesday 24 March 2026 02:33:49 +0000 (0:00:01.331) 0:00:12.586 ********* 2026-03-24 02:34:03.581679 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-24 02:34:03.581689 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-24 02:34:03.581699 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-24 02:34:03.581708 | orchestrator | 2026-03-24 02:34:03.581718 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ********************************
2026-03-24 02:34:03.581727 | orchestrator | Tuesday 24 March 2026 02:33:51 +0000 (0:00:01.561) 0:00:14.148 *********
2026-03-24 02:34:03.581739 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-24 02:34:03.581751 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-24 02:34:03.581762 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-24 02:34:03.581774 | orchestrator |
2026-03-24 02:34:03.581785 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-24 02:34:03.581797 | orchestrator | Tuesday 24 March 2026 02:33:52 +0000 (0:00:01.359) 0:00:15.507 *********
2026-03-24 02:34:03.581808 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-24 02:34:03.581819 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-24 02:34:03.581830 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-24 02:34:03.581841 | orchestrator |
2026-03-24 02:34:03.581852 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-24 02:34:03.581864 | orchestrator | Tuesday 24 March 2026 02:33:54 +0000 (0:00:01.369) 0:00:16.877 *********
2026-03-24 02:34:03.581877 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:34:03.581889 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:34:03.581918 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:34:03.581929 | orchestrator |
2026-03-24 02:34:03.581941 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-24 02:34:03.581960 | orchestrator | Tuesday 24 March 2026 02:33:54 +0000 (0:00:00.375) 0:00:17.252 *********
2026-03-24 02:34:03.581973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-24 02:34:03.581992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-24 02:34:03.582006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-24 02:34:03.582078 | orchestrator |
2026-03-24 02:34:03.582092 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-03-24 02:34:03.582110 | orchestrator | Tuesday 24 March 2026 02:33:55 +0000 (0:00:01.094) 0:00:18.346 *********
2026-03-24 02:34:03.582154 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:34:03.582173 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:34:03.582189 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:34:03.582205 | orchestrator |
2026-03-24 02:34:03.582221 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-24 02:34:03.582236 | orchestrator | Tuesday 24 March 2026 02:33:56 +0000 (0:00:00.917) 0:00:19.263 *********
2026-03-24 02:34:03.582264 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:34:03.582281 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:34:03.582297 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:34:03.582313 | orchestrator |
2026-03-24 02:34:03.582331 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-24 02:34:03.582359 | orchestrator | Tuesday 24 March 2026 02:34:03 +0000 (0:00:07.027) 0:00:26.291 *********
2026-03-24 02:35:42.710712 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:35:42.710863 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:35:42.710882 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:35:42.710895 | orchestrator |
2026-03-24 02:35:42.710909 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-24 02:35:42.710921 | orchestrator |
2026-03-24 02:35:42.710933 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-24 02:35:42.710944 | orchestrator | Tuesday 24 March 2026 02:34:04 +0000 (0:00:00.472) 0:00:26.763 *********
2026-03-24 02:35:42.710955 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:35:42.710967 | orchestrator |
2026-03-24 02:35:42.710977 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-24 02:35:42.710988 | orchestrator | Tuesday 24 March 2026 02:34:04 +0000 (0:00:00.606) 0:00:27.370 *********
2026-03-24 02:35:42.710999 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:35:42.711010 | orchestrator |
2026-03-24 02:35:42.711021 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-24 02:35:42.711032 | orchestrator | Tuesday 24 March 2026 02:34:04 +0000 (0:00:00.220) 0:00:27.591 *********
2026-03-24 02:35:42.711043 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:35:42.711053 | orchestrator |
2026-03-24 02:35:42.711064 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-24 02:35:42.711075 | orchestrator | Tuesday 24 March 2026 02:34:11 +0000 (0:00:06.674) 0:00:34.265 *********
2026-03-24 02:35:42.711086 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:35:42.711097 | orchestrator |
2026-03-24 02:35:42.711108 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-24 02:35:42.711119 | orchestrator |
2026-03-24 02:35:42.711130 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-24 02:35:42.711141 | orchestrator | Tuesday 24 March 2026 02:35:03 +0000 (0:00:51.584) 0:01:25.850 *********
2026-03-24 02:35:42.711152 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:35:42.711163 | orchestrator |
2026-03-24 02:35:42.711173 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-24 02:35:42.711184 | orchestrator | Tuesday 24 March 2026 02:35:03 +0000 (0:00:00.572) 0:01:26.422 *********
2026-03-24 02:35:42.711195 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:35:42.711206 | orchestrator |
2026-03-24 02:35:42.711217 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-24 02:35:42.711229 | orchestrator | Tuesday 24 March 2026 02:35:03 +0000 (0:00:00.198) 0:01:26.620 *********
2026-03-24 02:35:42.711346 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:35:42.711361 | orchestrator |
2026-03-24 02:35:42.711374 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-24 02:35:42.711386 | orchestrator | Tuesday 24 March 2026 02:35:05 +0000 (0:00:01.555) 0:01:28.176 *********
2026-03-24 02:35:42.711399 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:35:42.711412 | orchestrator |
2026-03-24 02:35:42.711447 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-24 02:35:42.711468 | orchestrator |
2026-03-24 02:35:42.711488 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-24 02:35:42.711508 | orchestrator | Tuesday 24 March 2026 02:35:21 +0000 (0:00:15.648) 0:01:43.825 *********
2026-03-24 02:35:42.711527 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:35:42.711546 | orchestrator |
2026-03-24 02:35:42.711566 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-24 02:35:42.711586 | orchestrator | Tuesday 24 March 2026 02:35:21 +0000 (0:00:00.844) 0:01:44.670 *********
2026-03-24 02:35:42.711637 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:35:42.711660 | orchestrator |
2026-03-24 02:35:42.711680 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-24 02:35:42.711698 | orchestrator | Tuesday 24 March 2026 02:35:22 +0000 (0:00:00.222) 0:01:44.892 *********
2026-03-24 02:35:42.711717 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:35:42.711735 | orchestrator |
2026-03-24 02:35:42.711755 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-24 02:35:42.711774 | orchestrator | Tuesday 24 March 2026 02:35:23 +0000 (0:00:01.594) 0:01:46.487 *********
2026-03-24 02:35:42.711794 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:35:42.711814 | orchestrator |
2026-03-24 02:35:42.711832 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-24 02:35:42.711843 | orchestrator |
2026-03-24 02:35:42.711854 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-24 02:35:42.711865 | orchestrator | Tuesday 24 March 2026 02:35:39 +0000 (0:00:15.625) 0:02:02.112 *********
2026-03-24 02:35:42.711876 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:35:42.711887 | orchestrator |
2026-03-24 02:35:42.711898 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-24 02:35:42.711908 | orchestrator | Tuesday 24 March 2026 02:35:39 +0000 (0:00:00.451) 0:02:02.564 *********
2026-03-24 02:35:42.711919 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-24 02:35:42.711930 | orchestrator | enable_outward_rabbitmq_True
2026-03-24 02:35:42.711941 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-24 02:35:42.711952 | orchestrator | outward_rabbitmq_restart
2026-03-24 02:35:42.711962 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:35:42.711973 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:35:42.711984 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:35:42.711995 | orchestrator |
2026-03-24 02:35:42.712005 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-24 02:35:42.712016 | orchestrator | skipping: no hosts matched
2026-03-24 02:35:42.712027 | orchestrator |
2026-03-24 02:35:42.712038 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-24 02:35:42.712049 | orchestrator | skipping: no hosts matched
2026-03-24 02:35:42.712059 | orchestrator |
2026-03-24 02:35:42.712070 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-24 02:35:42.712081 | orchestrator | skipping: no hosts matched
2026-03-24 02:35:42.712092 | orchestrator |
2026-03-24 02:35:42.712103 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:35:42.712136 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-24 02:35:42.712149 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 02:35:42.712160 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 02:35:42.712171 | orchestrator |
2026-03-24 02:35:42.712182 | orchestrator |
2026-03-24 02:35:42.712193 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:35:42.712204 | orchestrator | Tuesday 24 March 2026 02:35:42 +0000 (0:00:02.553) 0:02:05.117 *********
2026-03-24 02:35:42.712215 | orchestrator | ===============================================================================
2026-03-24 02:35:42.712226 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 82.86s
2026-03-24 02:35:42.712266 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.83s
2026-03-24 02:35:42.712287 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.03s
2026-03-24 02:35:42.712327 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.55s
2026-03-24 02:35:42.712353 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.02s
2026-03-24 02:35:42.712371 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.62s
2026-03-24 02:35:42.712388 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.56s
2026-03-24 02:35:42.712406 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.55s
2026-03-24 02:35:42.712421 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.39s
2026-03-24 02:35:42.712438 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.37s
2026-03-24 02:35:42.712456 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.36s
2026-03-24 02:35:42.712474 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.33s
2026-03-24 02:35:42.712493 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.09s
2026-03-24 02:35:42.712511 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.92s
2026-03-24 02:35:42.712530 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.87s
2026-03-24 02:35:42.712552 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.86s
2026-03-24 02:35:42.712565 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.79s
2026-03-24 02:35:42.712583 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.73s
2026-03-24 02:35:42.712610 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.64s
2026-03-24 02:35:42.712630 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s
2026-03-24 02:35:45.006401 | orchestrator | 2026-03-24 02:35:45 | INFO  | Task 6b99a2de-16b3-44b2-ae87-ca6f9ce7cfa0 (openvswitch) was prepared for execution.
2026-03-24 02:35:45.006495 | orchestrator | 2026-03-24 02:35:45 | INFO  | It takes a moment until task 6b99a2de-16b3-44b2-ae87-ca6f9ce7cfa0 (openvswitch) has been started and output is visible here.
2026-03-24 02:35:56.702398 | orchestrator |
2026-03-24 02:35:56.702506 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 02:35:56.702519 | orchestrator |
2026-03-24 02:35:56.702528 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 02:35:56.702536 | orchestrator | Tuesday 24 March 2026 02:35:48 +0000 (0:00:00.236) 0:00:00.236 *********
2026-03-24 02:35:56.702544 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:35:56.702553 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:35:56.702560 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:35:56.702567 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:35:56.702574 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:35:56.702581 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:35:56.702589 | orchestrator |
2026-03-24 02:35:56.702596 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 02:35:56.702603 | orchestrator | Tuesday 24 March 2026 02:35:49 +0000 (0:00:00.639) 0:00:00.876 *********
2026-03-24 02:35:56.702611 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-24 02:35:56.702619 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-24 02:35:56.702626 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-24 02:35:56.702633 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-24 02:35:56.702640 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-24 02:35:56.702648 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-24 02:35:56.702655 | orchestrator |
2026-03-24 02:35:56.702662 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-24 02:35:56.702669 | orchestrator |
2026-03-24 02:35:56.702698 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-24 02:35:56.702706 | orchestrator | Tuesday 24 March 2026 02:35:50 +0000 (0:00:00.544) 0:00:01.420 *********
2026-03-24 02:35:56.702715 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:35:56.702724 | orchestrator |
2026-03-24 02:35:56.702731 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-24 02:35:56.702738 | orchestrator | Tuesday 24 March 2026 02:35:51 +0000 (0:00:01.045) 0:00:02.465 *********
2026-03-24 02:35:56.702746 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-24 02:35:56.702754 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-24 02:35:56.702761 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-24 02:35:56.702768 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-24 02:35:56.702776 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-24 02:35:56.702783 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-24 02:35:56.702790 | orchestrator |
2026-03-24 02:35:56.702797 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-24 02:35:56.702804 | orchestrator | Tuesday 24 March 2026 02:35:52 +0000 (0:00:01.142) 0:00:03.607 *********
2026-03-24 02:35:56.702812 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-24 02:35:56.702819 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-24 02:35:56.702826 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-24 02:35:56.702833 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-24 02:35:56.702841 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-24 02:35:56.702848 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-24 02:35:56.702855 | orchestrator |
2026-03-24 02:35:56.702862 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-24 02:35:56.702869 | orchestrator | Tuesday 24 March 2026 02:35:53 +0000 (0:00:01.431) 0:00:05.039 *********
2026-03-24 02:35:56.702879 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-24 02:35:56.702887 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:35:56.702900 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-24 02:35:56.702912 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:35:56.702924 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-24 02:35:56.702936 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:35:56.702949 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-24 02:35:56.702962 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:35:56.702973 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-24 02:35:56.702987 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:35:56.702999 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-24 02:35:56.703010 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:35:56.703023 | orchestrator |
2026-03-24 02:35:56.703031 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-24 02:35:56.703039 | orchestrator | Tuesday 24 March 2026 02:35:54 +0000 (0:00:01.070) 0:00:06.109 *********
2026-03-24 02:35:56.703046 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:35:56.703053 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:35:56.703060 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:35:56.703068 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:35:56.703075 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:35:56.703082 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:35:56.703089 | orchestrator |
2026-03-24 02:35:56.703097 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-24 02:35:56.703104 | orchestrator | Tuesday 24 March 2026 02:35:55 +0000 (0:00:00.656) 0:00:06.766 *********
2026-03-24 02:35:56.703130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:56.703150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:56.703159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:56.703232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:56.703247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:56.703295 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:58.953771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:35:58.953897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:35:58.953923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:35:58.953945 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:35:58.953988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:35:58.954139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:35:58.954158 | orchestrator |
2026-03-24 02:35:58.954171 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-24 02:35:58.954184 | orchestrator | Tuesday 24 March 2026 02:35:56 +0000 (0:00:01.295) 0:00:08.062 *********
2026-03-24 02:35:58.954195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:58.954208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:58.954220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:58.954231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:58.954248 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:35:58.954303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 02:36:01.423203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:36:01.423388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:36:01.423412 | orchestrator | changed: [testbed-node-3] =>
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 02:36:01.423441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 02:36:01.423472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 02:36:01.423501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 02:36:01.423513 | orchestrator | 2026-03-24 02:36:01.423525 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-24 02:36:01.423536 | orchestrator | Tuesday 24 March 2026 02:35:59 +0000 (0:00:02.259) 0:00:10.321 ********* 2026-03-24 02:36:01.423546 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:36:01.423556 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:36:01.423566 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:36:01.423575 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:36:01.423585 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:36:01.423594 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:36:01.423604 | orchestrator | 2026-03-24 02:36:01.423614 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-24 02:36:01.423624 | orchestrator | Tuesday 24 March 2026 02:35:59 +0000 (0:00:00.775) 0:00:11.097 ********* 2026-03-24 02:36:01.423634 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 02:36:01.423646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 02:36:01.423667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 02:36:01.423678 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 02:36:01.423697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 02:36:21.645535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 02:36:21.645657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 02:36:21.645674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 
02:36:21.645732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 02:36:21.645754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 02:36:21.645796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:36:21.645817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 02:36:21.645837 | orchestrator |
2026-03-24 02:36:21.645851 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 02:36:21.645863 | orchestrator | Tuesday 24 March 2026 02:36:01 +0000 (0:00:01.690) 0:00:12.787 *********
2026-03-24 02:36:21.645875 | orchestrator |
2026-03-24 02:36:21.645886 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 02:36:21.645897 | orchestrator | Tuesday 24 March 2026 02:36:01 +0000 (0:00:00.236) 0:00:13.024 *********
2026-03-24 02:36:21.645907 | orchestrator |
2026-03-24 02:36:21.645918 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 02:36:21.645939 | orchestrator | Tuesday 24 March 2026 02:36:01 +0000 (0:00:00.115) 0:00:13.139 *********
2026-03-24 02:36:21.645950 | orchestrator |
2026-03-24 02:36:21.645960 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 02:36:21.645971 | orchestrator | Tuesday 24 March 2026 02:36:01 +0000 (0:00:00.115) 0:00:13.255 *********
2026-03-24 02:36:21.645982 | orchestrator |
2026-03-24 02:36:21.645992 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 02:36:21.646003 | orchestrator | Tuesday 24 March 2026 02:36:02 +0000 (0:00:00.115) 0:00:13.370 *********
2026-03-24 02:36:21.646013 | orchestrator |
2026-03-24 02:36:21.646091 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 02:36:21.646104 | orchestrator | Tuesday 24 March 2026 02:36:02 +0000 (0:00:00.114) 0:00:13.485 *********
2026-03-24 02:36:21.646116 | orchestrator |
2026-03-24 02:36:21.646129 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-24 02:36:21.646141 | orchestrator | Tuesday 24 March 2026 02:36:02 +0000 (0:00:00.114) 0:00:13.600 *********
2026-03-24 02:36:21.646153 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:36:21.646166 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:36:21.646178 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:36:21.646191 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:36:21.646203 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:36:21.646215 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:36:21.646226 | orchestrator |
2026-03-24 02:36:21.646239 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-24 02:36:21.646252 | orchestrator | Tuesday 24 March 2026 02:36:10 +0000 (0:00:08.542) 0:00:22.142 *********
2026-03-24 02:36:21.646264 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:36:21.646277 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:36:21.646317 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:36:21.646330 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:36:21.646349 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:36:21.646361 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:36:21.646372 | orchestrator |
2026-03-24 02:36:21.646383 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-24 02:36:21.646394 | orchestrator | Tuesday 24 March 2026 02:36:11 +0000 (0:00:01.087) 0:00:23.230 *********
2026-03-24 02:36:21.646405 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:36:21.646416 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:36:21.646427 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:36:21.646437 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:36:21.646448 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:36:21.646458 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:36:21.646469 | orchestrator |
2026-03-24 02:36:21.646480 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-24 02:36:21.646490 | orchestrator | Tuesday 24 March 2026 02:36:15 +0000 (0:00:03.119) 0:00:26.349 *********
2026-03-24 02:36:21.646501 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-24 02:36:21.646513 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-24 02:36:21.646524 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-24 02:36:21.646534 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-24 02:36:21.646545 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-24 02:36:21.646555 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-24 02:36:21.646566 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-24 02:36:21.646586 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-24 02:36:34.452646 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-24 02:36:34.452776 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-24 02:36:34.452799 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-24 02:36:34.452818 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-24 02:36:34.452836 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 02:36:34.452853 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 02:36:34.452870 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 02:36:34.452886 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 02:36:34.452903 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 02:36:34.452913 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 02:36:34.452924 | orchestrator |
2026-03-24 02:36:34.452935 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-24 02:36:34.452946 | orchestrator | Tuesday 24 March 2026 02:36:21 +0000 (0:00:06.572) 0:00:32.922 *********
2026-03-24 02:36:34.452957 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-24 02:36:34.452968 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:36:34.452979 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-24 02:36:34.452988 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:36:34.452998 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-24 02:36:34.453007 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:36:34.453017 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-24 02:36:34.453027 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-24 02:36:34.453037 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-24 02:36:34.453046 | orchestrator |
2026-03-24 02:36:34.453057 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-24 02:36:34.453066 | orchestrator | Tuesday 24 March 2026 02:36:24 +0000 (0:00:02.507) 0:00:35.429 *********
2026-03-24 02:36:34.453076 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-24 02:36:34.453086 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:36:34.453096 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-24 02:36:34.453106 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:36:34.453115 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-24 02:36:34.453125 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:36:34.453134 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-24 02:36:34.453144 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-24 02:36:34.453154 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-24 02:36:34.453163 | orchestrator |
2026-03-24 02:36:34.453190 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-24 02:36:34.453203 | orchestrator | Tuesday 24 March 2026 02:36:27 +0000 (0:00:02.985) 0:00:38.415 *********
2026-03-24 02:36:34.453214 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:36:34.453225 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:36:34.453237 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:36:34.453248 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:36:34.453282 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:36:34.453293 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:36:34.453342 | orchestrator |
2026-03-24 02:36:34.453357 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:36:34.453370 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-24 02:36:34.453383 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-24 02:36:34.453393 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-24 02:36:34.453405 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 02:36:34.453416 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 02:36:34.453427 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 02:36:34.453438 | orchestrator |
2026-03-24 02:36:34.453456 | orchestrator |
2026-03-24 02:36:34.453472 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:36:34.453489 | orchestrator | Tuesday 24 March 2026 02:36:34 +0000 (0:00:07.002) 0:00:45.417 *********
2026-03-24 02:36:34.453526 | orchestrator | ===============================================================================
2026-03-24 02:36:34.453545 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.12s
2026-03-24 02:36:34.453563 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.54s
2026-03-24 02:36:34.453581 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.57s
2026-03-24 02:36:34.453597 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.99s
2026-03-24 02:36:34.453613 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.51s
2026-03-24 02:36:34.453623 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.26s
2026-03-24 02:36:34.453633 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.69s
2026-03-24 02:36:34.453642 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.43s
2026-03-24 02:36:34.453652 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.30s
2026-03-24 02:36:34.453661 | orchestrator | module-load : Load modules ---------------------------------------------- 1.14s
2026-03-24 02:36:34.453671 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.09s
2026-03-24 02:36:34.453680 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.07s
2026-03-24 02:36:34.453690 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.05s
2026-03-24 02:36:34.453699 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.81s
2026-03-24 02:36:34.453709 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.78s
2026-03-24 02:36:34.453718 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.66s
2026-03-24 02:36:34.453728 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.64s
2026-03-24 02:36:34.453737 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s
2026-03-24 02:36:36.666502 | orchestrator | 2026-03-24 02:36:36 | INFO  | Task 37541649-ef23-4c49-8c27-fe1af77d79a4 (ovn) was prepared for execution.
2026-03-24 02:36:36.666582 | orchestrator | 2026-03-24 02:36:36 | INFO  | It takes a moment until task 37541649-ef23-4c49-8c27-fe1af77d79a4 (ovn) has been started and output is visible here.
2026-03-24 02:36:46.848830 | orchestrator |
2026-03-24 02:36:46.848944 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 02:36:46.848962 | orchestrator |
2026-03-24 02:36:46.848975 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 02:36:46.848987 | orchestrator | Tuesday 24 March 2026 02:36:40 +0000 (0:00:00.167) 0:00:00.167 *********
2026-03-24 02:36:46.848998 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:36:46.849010 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:36:46.849021 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:36:46.849063 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:36:46.849075 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:36:46.849087 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:36:46.849098 | orchestrator |
2026-03-24 02:36:46.849109 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 02:36:46.849120 | orchestrator | Tuesday 24 March 2026 02:36:41 +0000 (0:00:00.667) 0:00:00.834 *********
2026-03-24 02:36:46.849132 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-24 02:36:46.849143 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-03-24 02:36:46.849171 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-03-24 02:36:46.849184 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-24 02:36:46.849195 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-24 02:36:46.849205 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-03-24 02:36:46.849216 | orchestrator |
2026-03-24 02:36:46.849227 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-03-24 02:36:46.849239 | orchestrator |
2026-03-24 02:36:46.849250 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-03-24 02:36:46.849261 | orchestrator | Tuesday 24 March 2026 02:36:42 +0000 (0:00:00.779) 0:00:01.613 *********
2026-03-24 02:36:46.849273 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:36:46.849285 | orchestrator |
2026-03-24 02:36:46.849296 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-03-24 02:36:46.849308 | orchestrator | Tuesday 24 March 2026 02:36:43 +0000 (0:00:01.011) 0:00:02.625 *********
2026-03-24 02:36:46.849400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849443 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849589 | orchestrator |
2026-03-24 02:36:46.849604 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-03-24 02:36:46.849615 | orchestrator | Tuesday 24 March 2026 02:36:44 +0000 (0:00:01.185) 0:00:03.810 *********
2026-03-24 02:36:46.849626 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:36:46.849710 | orchestrator |
2026-03-24 02:36:46.849722 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-03-24 02:36:46.849733 | orchestrator | Tuesday 24 March 2026 02:36:45 +0000 (0:00:01.487) 0:00:05.298 *********
2026-03-24 02:36:46.849744 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro',
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:36:46.849755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:36:46.849775 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892195 | orchestrator | 2026-03-24 02:37:11.892213 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-24 02:37:11.892228 | orchestrator | Tuesday 24 March 2026 02:36:46 +0000 (0:00:01.125) 0:00:06.423 ********* 2026-03-24 02:37:11.892244 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892460 | orchestrator | 2026-03-24 02:37:11.892476 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-24 02:37:11.892492 | orchestrator | Tuesday 24 March 2026 02:36:48 +0000 (0:00:01.468) 0:00:07.892 ********* 
2026-03-24 02:37:11.892516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892588 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 02:37:11.892610 | orchestrator | 2026-03-24 02:37:11.892620 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-24 02:37:11.892631 | orchestrator | Tuesday 24 March 2026 02:36:49 +0000 (0:00:01.373) 0:00:09.266 ********* 2026-03-24 02:37:11.892641 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:37:11.892652 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:37:11.892662 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:37:11.892671 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:37:11.892680 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:37:11.892690 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:37:11.892699 | orchestrator | 2026-03-24 02:37:11.892709 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-24 02:37:11.892718 | orchestrator | Tuesday 24 March 2026 02:36:52 +0000 (0:00:02.639) 0:00:11.906 ********* 2026-03-24 02:37:11.892728 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-03-24 02:37:11.892739 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-24 02:37:11.892748 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-24 02:37:11.892757 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-24 02:37:11.892767 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-24 02:37:11.892777 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-24 02:37:11.892794 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 02:37:50.142061 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 02:37:50.142175 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 02:37:50.142194 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 02:37:50.142207 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 02:37:50.142235 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 02:37:50.142247 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-24 02:37:50.142260 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-24 02:37:50.142272 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-24 02:37:50.142299 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-24 02:37:50.142305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-24 02:37:50.142311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-24 02:37:50.142319 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 02:37:50.142326 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 02:37:50.142333 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 02:37:50.142339 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 02:37:50.142345 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 02:37:50.142352 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 02:37:50.142358 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 02:37:50.142365 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 02:37:50.142371 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 02:37:50.142424 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 02:37:50.142437 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-03-24 02:37:50.142447 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 02:37:50.142457 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 02:37:50.142468 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 02:37:50.142478 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 02:37:50.142489 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 02:37:50.142500 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 02:37:50.142510 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 02:37:50.142520 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-24 02:37:50.142531 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-24 02:37:50.142541 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-24 02:37:50.142551 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-24 02:37:50.142562 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-24 02:37:50.142573 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-24 02:37:50.142584 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2026-03-24 02:37:50.142614 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-24 02:37:50.142633 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-24 02:37:50.142644 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-24 02:37:50.142661 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-24 02:37:50.142672 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-24 02:37:50.142683 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-24 02:37:50.142694 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-24 02:37:50.142704 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-24 02:37:50.142715 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-24 02:37:50.142726 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-24 02:37:50.142736 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-24 02:37:50.142747 | orchestrator | 2026-03-24 02:37:50.142759 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-03-24 02:37:50.142769 | orchestrator | Tuesday 24 March 2026 02:37:11 +0000 (0:00:19.001) 0:00:30.907 ********* 2026-03-24 02:37:50.142779 | orchestrator | 2026-03-24 02:37:50.142790 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 02:37:50.142799 | orchestrator | Tuesday 24 March 2026 02:37:11 +0000 (0:00:00.241) 0:00:31.149 ********* 2026-03-24 02:37:50.142808 | orchestrator | 2026-03-24 02:37:50.142817 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 02:37:50.142828 | orchestrator | Tuesday 24 March 2026 02:37:11 +0000 (0:00:00.062) 0:00:31.211 ********* 2026-03-24 02:37:50.142837 | orchestrator | 2026-03-24 02:37:50.142847 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 02:37:50.142856 | orchestrator | Tuesday 24 March 2026 02:37:11 +0000 (0:00:00.061) 0:00:31.272 ********* 2026-03-24 02:37:50.142865 | orchestrator | 2026-03-24 02:37:50.142875 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 02:37:50.142884 | orchestrator | Tuesday 24 March 2026 02:37:11 +0000 (0:00:00.065) 0:00:31.338 ********* 2026-03-24 02:37:50.142895 | orchestrator | 2026-03-24 02:37:50.142905 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 02:37:50.142914 | orchestrator | Tuesday 24 March 2026 02:37:11 +0000 (0:00:00.067) 0:00:31.406 ********* 2026-03-24 02:37:50.142924 | orchestrator | 2026-03-24 02:37:50.142935 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-24 02:37:50.142945 | orchestrator | Tuesday 24 March 2026 02:37:11 +0000 (0:00:00.061) 0:00:31.467 ********* 2026-03-24 02:37:50.142954 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:37:50.142966 | orchestrator | ok: 
[testbed-node-5] 2026-03-24 02:37:50.142976 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:37:50.142986 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:37:50.142996 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:37:50.143006 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:37:50.143016 | orchestrator | 2026-03-24 02:37:50.143025 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-24 02:37:50.143034 | orchestrator | Tuesday 24 March 2026 02:37:13 +0000 (0:00:01.603) 0:00:33.071 ********* 2026-03-24 02:37:50.143044 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:37:50.143055 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:37:50.143073 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:37:50.143083 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:37:50.143093 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:37:50.143104 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:37:50.143114 | orchestrator | 2026-03-24 02:37:50.143125 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-24 02:37:50.143135 | orchestrator | 2026-03-24 02:37:50.143145 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-24 02:37:50.143155 | orchestrator | Tuesday 24 March 2026 02:37:48 +0000 (0:00:34.591) 0:01:07.662 ********* 2026-03-24 02:37:50.143165 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:37:50.143175 | orchestrator | 2026-03-24 02:37:50.143185 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-24 02:37:50.143195 | orchestrator | Tuesday 24 March 2026 02:37:48 +0000 (0:00:00.625) 0:01:08.288 ********* 2026-03-24 02:37:50.143205 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-24 02:37:50.143214 | orchestrator | 2026-03-24 02:37:50.143223 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-24 02:37:50.143232 | orchestrator | Tuesday 24 March 2026 02:37:49 +0000 (0:00:00.497) 0:01:08.786 ********* 2026-03-24 02:37:50.143242 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:37:50.143253 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:37:50.143263 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:37:50.143272 | orchestrator | 2026-03-24 02:37:50.143282 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-24 02:37:50.143302 | orchestrator | Tuesday 24 March 2026 02:37:50 +0000 (0:00:00.926) 0:01:09.712 ********* 2026-03-24 02:38:00.218814 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:38:00.218904 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:38:00.218915 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:38:00.218923 | orchestrator | 2026-03-24 02:38:00.218932 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-24 02:38:00.218941 | orchestrator | Tuesday 24 March 2026 02:37:50 +0000 (0:00:00.317) 0:01:10.030 ********* 2026-03-24 02:38:00.218949 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:38:00.218980 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:38:00.218992 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:38:00.219004 | orchestrator | 2026-03-24 02:38:00.219017 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-24 02:38:00.219028 | orchestrator | Tuesday 24 March 2026 02:37:50 +0000 (0:00:00.307) 0:01:10.338 ********* 2026-03-24 02:38:00.219039 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:38:00.219052 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:38:00.219064 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:38:00.219076 | orchestrator | 
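The lookup_cluster.yml tasks above probe each node to decide whether an OVN NB/SB Raft cluster already exists before bootstrapping a new one. Outside Ansible, the same check is commonly done with `ovs-appctl cluster/status`. The sketch below is a hypothetical manual equivalent, not the role's actual implementation: the container name `ovn_nb_db` and the control-socket path are assumptions based on typical Kolla layouts, and the sample output is illustrative. The parsing part is self-contained and runnable.

```shell
#!/usr/bin/env bash
# Hypothetical manual equivalent of the "Check NB cluster status" task
# (container name and socket path are assumptions, shown for illustration):
#
#   docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl \
#       cluster/status OVN_Northbound
#
# Below, a sample of the command's output format is parsed locally to
# extract the Raft role and member status, the two fields such a
# liveness/leader check cares about.
sample='Cluster ID: f1a2 (f1a2...)
Server ID: ab12 (ab12...)
Status: cluster member
Role: leader
Term: 1'

# awk splits on ": " so multi-word values (e.g. "cluster member") stay intact.
role=$(printf '%s\n' "$sample" | awk -F': ' '/^Role:/ {print $2}')
status=$(printf '%s\n' "$sample" | awk -F': ' '/^Status:/ {print $2}')
echo "role=$role status=$status"
```

A follower would report `Role: follower`; the "Fail on existing OVN NB cluster with no leader" task seen later in this log guards against the case where no member reports the leader role.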
2026-03-24 02:38:00.219089 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-24 02:38:00.219102 | orchestrator | Tuesday 24 March 2026 02:37:51 +0000 (0:00:00.296) 0:01:10.635 ********* 2026-03-24 02:38:00.219115 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:38:00.219127 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:38:00.219140 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:38:00.219153 | orchestrator | 2026-03-24 02:38:00.219166 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-24 02:38:00.219178 | orchestrator | Tuesday 24 March 2026 02:37:51 +0000 (0:00:00.443) 0:01:11.079 ********* 2026-03-24 02:38:00.219186 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:38:00.219195 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:38:00.219202 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:38:00.219209 | orchestrator | 2026-03-24 02:38:00.219216 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-24 02:38:00.219224 | orchestrator | Tuesday 24 March 2026 02:37:51 +0000 (0:00:00.276) 0:01:11.355 ********* 2026-03-24 02:38:00.219252 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:38:00.219259 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:38:00.219267 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:38:00.219274 | orchestrator | 2026-03-24 02:38:00.219281 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-24 02:38:00.219288 | orchestrator | Tuesday 24 March 2026 02:37:52 +0000 (0:00:00.268) 0:01:11.624 ********* 2026-03-24 02:38:00.219295 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:38:00.219302 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:38:00.219309 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:38:00.219317 | orchestrator | 2026-03-24 
02:38:00.219324 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-24 02:38:00.219331 | orchestrator | Tuesday 24 March 2026 02:37:52 +0000 (0:00:00.269) 0:01:11.894 *********
2026-03-24 02:38:00.219338 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219345 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219353 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219360 | orchestrator |
2026-03-24 02:38:00.219369 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-24 02:38:00.219378 | orchestrator | Tuesday 24 March 2026 02:37:52 +0000 (0:00:00.266) 0:01:12.161 *********
2026-03-24 02:38:00.219432 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219443 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219452 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219461 | orchestrator |
2026-03-24 02:38:00.219469 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-24 02:38:00.219478 | orchestrator | Tuesday 24 March 2026 02:37:52 +0000 (0:00:00.421) 0:01:12.583 *********
2026-03-24 02:38:00.219486 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219494 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219503 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219511 | orchestrator |
2026-03-24 02:38:00.219520 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-24 02:38:00.219528 | orchestrator | Tuesday 24 March 2026 02:37:53 +0000 (0:00:00.272) 0:01:12.855 *********
2026-03-24 02:38:00.219537 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219545 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219553 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219561 | orchestrator |
2026-03-24 02:38:00.219570 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-24 02:38:00.219578 | orchestrator | Tuesday 24 March 2026 02:37:53 +0000 (0:00:00.268) 0:01:13.124 *********
2026-03-24 02:38:00.219587 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219595 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219604 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219612 | orchestrator |
2026-03-24 02:38:00.219620 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-24 02:38:00.219629 | orchestrator | Tuesday 24 March 2026 02:37:53 +0000 (0:00:00.262) 0:01:13.386 *********
2026-03-24 02:38:00.219637 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219645 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219653 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219662 | orchestrator |
2026-03-24 02:38:00.219670 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-24 02:38:00.219678 | orchestrator | Tuesday 24 March 2026 02:37:54 +0000 (0:00:00.434) 0:01:13.820 *********
2026-03-24 02:38:00.219687 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219695 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219704 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219712 | orchestrator |
2026-03-24 02:38:00.219721 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-24 02:38:00.219729 | orchestrator | Tuesday 24 March 2026 02:37:54 +0000 (0:00:00.289) 0:01:14.110 *********
2026-03-24 02:38:00.219736 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219749 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219757 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219764 | orchestrator |
2026-03-24 02:38:00.219771 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-24 02:38:00.219779 | orchestrator | Tuesday 24 March 2026 02:37:54 +0000 (0:00:00.266) 0:01:14.377 *********
2026-03-24 02:38:00.219802 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219810 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219817 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219825 | orchestrator |
2026-03-24 02:38:00.219832 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-24 02:38:00.219839 | orchestrator | Tuesday 24 March 2026 02:37:55 +0000 (0:00:00.297) 0:01:14.674 *********
2026-03-24 02:38:00.219852 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:38:00.219860 | orchestrator |
2026-03-24 02:38:00.219868 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-24 02:38:00.219875 | orchestrator | Tuesday 24 March 2026 02:37:55 +0000 (0:00:00.701) 0:01:15.376 *********
2026-03-24 02:38:00.219882 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:38:00.219889 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:38:00.219896 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:38:00.219903 | orchestrator |
2026-03-24 02:38:00.219911 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-24 02:38:00.219918 | orchestrator | Tuesday 24 March 2026 02:37:56 +0000 (0:00:00.409) 0:01:15.785 *********
2026-03-24 02:38:00.219925 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:38:00.219932 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:38:00.219939 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:38:00.219946 | orchestrator |
2026-03-24 02:38:00.219954 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-24 02:38:00.219961 | orchestrator | Tuesday 24 March 2026 02:37:56 +0000 (0:00:00.305) 0:01:16.197 *********
2026-03-24 02:38:00.219968 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.219975 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.219982 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.219989 | orchestrator |
2026-03-24 02:38:00.219999 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-24 02:38:00.220011 | orchestrator | Tuesday 24 March 2026 02:37:56 +0000 (0:00:00.509) 0:01:16.502 *********
2026-03-24 02:38:00.220028 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.220042 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.220053 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.220065 | orchestrator |
2026-03-24 02:38:00.220077 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-24 02:38:00.220088 | orchestrator | Tuesday 24 March 2026 02:37:57 +0000 (0:00:00.304) 0:01:17.012 *********
2026-03-24 02:38:00.220100 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.220111 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.220121 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.220132 | orchestrator |
2026-03-24 02:38:00.220144 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-24 02:38:00.220156 | orchestrator | Tuesday 24 March 2026 02:37:57 +0000 (0:00:00.304) 0:01:17.316 *********
2026-03-24 02:38:00.220168 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.220181 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.220193 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.220205 | orchestrator |
2026-03-24 02:38:00.220217 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-24 02:38:00.220230 | orchestrator | Tuesday 24 March 2026 02:37:58 +0000 (0:00:00.323) 0:01:17.640 *********
2026-03-24 02:38:00.220242 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.220253 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.220276 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.220286 | orchestrator |
2026-03-24 02:38:00.220296 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-24 02:38:00.220306 | orchestrator | Tuesday 24 March 2026 02:37:58 +0000 (0:00:00.279) 0:01:17.920 *********
2026-03-24 02:38:00.220316 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:00.220327 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:00.220337 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:00.220347 | orchestrator |
2026-03-24 02:38:00.220357 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-24 02:38:00.220368 | orchestrator | Tuesday 24 March 2026 02:37:58 +0000 (0:00:00.478) 0:01:18.398 *********
2026-03-24 02:38:00.220382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:00.220424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:00.220438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:00.220464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.290732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.290845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.290861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.290874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.290906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.290918 | orchestrator |
2026-03-24 02:38:06.290932 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-24 02:38:06.290944 | orchestrator | Tuesday 24 March 2026 02:38:00 +0000 (0:00:01.397) 0:01:19.796 *********
2026-03-24 02:38:06.290958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.290971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.290983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.290994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291098 | orchestrator |
2026-03-24 02:38:06.291110 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-24 02:38:06.291121 | orchestrator | Tuesday 24 March 2026 02:38:03 +0000 (0:00:03.769) 0:01:23.566 *********
2026-03-24 02:38:06.291132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:06.291202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.893548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.893630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.893654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.893661 | orchestrator |
2026-03-24 02:38:28.893667 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-24 02:38:28.893674 | orchestrator | Tuesday 24 March 2026 02:38:05 +0000 (0:00:02.021) 0:01:25.587 *********
2026-03-24 02:38:28.893678 | orchestrator |
2026-03-24 02:38:28.893684 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-24 02:38:28.893688 | orchestrator | Tuesday 24 March 2026 02:38:06 +0000 (0:00:00.058) 0:01:25.645 *********
2026-03-24 02:38:28.893693 | orchestrator |
2026-03-24 02:38:28.893698 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-24 02:38:28.893703 | orchestrator | Tuesday 24 March 2026 02:38:06 +0000 (0:00:00.161) 0:01:25.807 *********
2026-03-24 02:38:28.893708 | orchestrator |
2026-03-24 02:38:28.893712 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-24 02:38:28.893717 | orchestrator | Tuesday 24 March 2026 02:38:06 +0000 (0:00:00.057) 0:01:25.864 *********
2026-03-24 02:38:28.893722 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:38:28.893728 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:38:28.893732 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:38:28.893737 | orchestrator |
2026-03-24 02:38:28.893742 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-24 02:38:28.893747 | orchestrator | Tuesday 24 March 2026 02:38:13 +0000 (0:00:07.347) 0:01:33.212 *********
2026-03-24 02:38:28.893752 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:38:28.893757 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:38:28.893761 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:38:28.893766 | orchestrator |
2026-03-24 02:38:28.893771 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-24 02:38:28.893776 | orchestrator | Tuesday 24 March 2026 02:38:16 +0000 (0:00:02.378) 0:01:35.590 *********
2026-03-24 02:38:28.893780 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:38:28.893785 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:38:28.893790 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:38:28.893794 | orchestrator |
2026-03-24 02:38:28.893799 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-24 02:38:28.893804 | orchestrator | Tuesday 24 March 2026 02:38:22 +0000 (0:00:06.507) 0:01:42.098 *********
2026-03-24 02:38:28.893809 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:38:28.893813 | orchestrator |
2026-03-24 02:38:28.893818 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-24 02:38:28.893823 | orchestrator | Tuesday 24 March 2026 02:38:22 +0000 (0:00:00.117) 0:01:42.215 *********
2026-03-24 02:38:28.893828 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:38:28.893833 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:38:28.893838 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:38:28.893843 | orchestrator |
2026-03-24 02:38:28.893848 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-24 02:38:28.893853 | orchestrator | Tuesday 24 March 2026 02:38:23 +0000 (0:00:00.891) 0:01:43.106 *********
2026-03-24 02:38:28.893857 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:28.893862 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:28.893867 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:38:28.893876 | orchestrator |
2026-03-24 02:38:28.893881 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-24 02:38:28.893885 | orchestrator | Tuesday 24 March 2026 02:38:24 +0000 (0:00:00.609) 0:01:43.716 *********
2026-03-24 02:38:28.893890 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:38:28.893895 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:38:28.893900 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:38:28.893904 | orchestrator |
2026-03-24 02:38:28.893909 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-24 02:38:28.893914 | orchestrator | Tuesday 24 March 2026 02:38:24 +0000 (0:00:00.749) 0:01:44.465 *********
2026-03-24 02:38:28.893919 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:38:28.893923 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:38:28.893938 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:38:28.893942 | orchestrator |
2026-03-24 02:38:28.893947 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-24 02:38:28.893952 | orchestrator | Tuesday 24 March 2026 02:38:25 +0000 (0:00:00.573) 0:01:45.039 *********
2026-03-24 02:38:28.893957 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:38:28.893961 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:38:28.893977 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:38:28.893982 | orchestrator |
2026-03-24 02:38:28.893987 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-24 02:38:28.893992 | orchestrator | Tuesday 24 March 2026 02:38:26 +0000 (0:00:01.071) 0:01:46.110 *********
2026-03-24 02:38:28.893997 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:38:28.894001 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:38:28.894006 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:38:28.894011 | orchestrator |
2026-03-24 02:38:28.894052 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-24 02:38:28.894057 | orchestrator | Tuesday 24 March 2026 02:38:27 +0000 (0:00:00.272) 0:01:46.823 *********
2026-03-24 02:38:28.894062 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:38:28.894067 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:38:28.894072 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:38:28.894077 | orchestrator |
2026-03-24 02:38:28.894084 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-24 02:38:28.894092 | orchestrator | Tuesday 24 March 2026 02:38:27 +0000 (0:00:00.272) 0:01:47.096 *********
2026-03-24 02:38:28.894103 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.894114 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.894122 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.894130 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.894145 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.894154 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.894163 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.894176 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:28.894194 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.708929 | orchestrator |
2026-03-24 02:38:35.709055 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-24 02:38:35.709077 | orchestrator | Tuesday 24 March 2026 02:38:28 +0000 (0:00:01.367) 0:01:48.464 *********
2026-03-24 02:38:35.709093 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709109 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709124 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709139 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709212 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709272 | orchestrator |
2026-03-24 02:38:35.709287 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-24 02:38:35.709302 | orchestrator | Tuesday 24 March 2026 02:38:32 +0000 (0:00:03.751) 0:01:52.215 *********
2026-03-24 02:38:35.709337 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709352 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709367 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709382 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709546 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 02:38:35.709575 | orchestrator |
2026-03-24 02:38:35.709596 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-24 02:38:35.709611 | orchestrator | Tuesday 24 March 2026 02:38:35 +0000 (0:00:02.869) 0:01:55.084 *********
2026-03-24 02:38:35.709625 | orchestrator |
2026-03-24 02:38:35.709639 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-24 02:38:35.709653 | orchestrator | Tuesday 24 March 2026 02:38:35 +0000 (0:00:00.059) 0:01:55.144 *********
2026-03-24 02:38:35.709667 | orchestrator |
2026-03-24 02:38:35.709680 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-24 02:38:35.709695 | orchestrator | Tuesday 24 March 2026 02:38:35 +0000 (0:00:00.065) 0:01:55.209 *********
2026-03-24 02:38:35.709709 | orchestrator |
2026-03-24 02:38:35.709732 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-24 02:38:59.515169 | orchestrator | Tuesday 24 March 2026 02:38:35 +0000 (0:00:00.064) 0:01:55.274 *********
2026-03-24 02:38:59.515271 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:38:59.515284 | orchestrator | changed:
[testbed-node-2] 2026-03-24 02:38:59.515293 | orchestrator | 2026-03-24 02:38:59.515304 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-24 02:38:59.515312 | orchestrator | Tuesday 24 March 2026 02:38:41 +0000 (0:00:06.153) 0:02:01.427 ********* 2026-03-24 02:38:59.515320 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:38:59.515328 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:38:59.515336 | orchestrator | 2026-03-24 02:38:59.515345 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-24 02:38:59.515353 | orchestrator | Tuesday 24 March 2026 02:38:47 +0000 (0:00:06.150) 0:02:07.578 ********* 2026-03-24 02:38:59.515386 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:38:59.515394 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:38:59.515402 | orchestrator | 2026-03-24 02:38:59.515411 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-24 02:38:59.515419 | orchestrator | Tuesday 24 March 2026 02:38:54 +0000 (0:00:06.146) 0:02:13.724 ********* 2026-03-24 02:38:59.515427 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:38:59.515435 | orchestrator | 2026-03-24 02:38:59.515485 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-24 02:38:59.515496 | orchestrator | Tuesday 24 March 2026 02:38:54 +0000 (0:00:00.146) 0:02:13.871 ********* 2026-03-24 02:38:59.515504 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:38:59.515513 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:38:59.515521 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:38:59.515528 | orchestrator | 2026-03-24 02:38:59.515537 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-24 02:38:59.515542 | orchestrator | Tuesday 24 March 2026 02:38:55 +0000 (0:00:00.989) 0:02:14.861 ********* 
2026-03-24 02:38:59.515547 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:38:59.515552 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:38:59.515557 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:38:59.515562 | orchestrator | 2026-03-24 02:38:59.515567 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-24 02:38:59.515571 | orchestrator | Tuesday 24 March 2026 02:38:55 +0000 (0:00:00.612) 0:02:15.474 ********* 2026-03-24 02:38:59.515576 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:38:59.515582 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:38:59.515586 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:38:59.515591 | orchestrator | 2026-03-24 02:38:59.515596 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-24 02:38:59.515601 | orchestrator | Tuesday 24 March 2026 02:38:56 +0000 (0:00:00.780) 0:02:16.254 ********* 2026-03-24 02:38:59.515605 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:38:59.515610 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:38:59.515615 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:38:59.515620 | orchestrator | 2026-03-24 02:38:59.515624 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-24 02:38:59.515629 | orchestrator | Tuesday 24 March 2026 02:38:57 +0000 (0:00:00.623) 0:02:16.878 ********* 2026-03-24 02:38:59.515634 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:38:59.515639 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:38:59.515644 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:38:59.515648 | orchestrator | 2026-03-24 02:38:59.515653 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-24 02:38:59.515659 | orchestrator | Tuesday 24 March 2026 02:38:58 +0000 (0:00:00.995) 0:02:17.873 ********* 2026-03-24 02:38:59.515664 | orchestrator 
| ok: [testbed-node-0] 2026-03-24 02:38:59.515668 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:38:59.515673 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:38:59.515678 | orchestrator | 2026-03-24 02:38:59.515682 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:38:59.515689 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-24 02:38:59.515695 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-24 02:38:59.515700 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-24 02:38:59.515705 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:38:59.515714 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:38:59.515730 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:38:59.515738 | orchestrator | 2026-03-24 02:38:59.515747 | orchestrator | 2026-03-24 02:38:59.515754 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:38:59.515777 | orchestrator | Tuesday 24 March 2026 02:38:59 +0000 (0:00:00.890) 0:02:18.764 ********* 2026-03-24 02:38:59.515786 | orchestrator | =============================================================================== 2026-03-24 02:38:59.515794 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.59s 2026-03-24 02:38:59.515802 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.00s 2026-03-24 02:38:59.515810 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.50s 2026-03-24 02:38:59.515818 | orchestrator | ovn-db 
: Restart ovn-northd container ---------------------------------- 12.65s 2026-03-24 02:38:59.515826 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.53s 2026-03-24 02:38:59.515851 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.77s 2026-03-24 02:38:59.515861 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.75s 2026-03-24 02:38:59.515870 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.87s 2026-03-24 02:38:59.515878 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.64s 2026-03-24 02:38:59.515886 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.02s 2026-03-24 02:38:59.515895 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.60s 2026-03-24 02:38:59.515903 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.49s 2026-03-24 02:38:59.515911 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.47s 2026-03-24 02:38:59.515918 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s 2026-03-24 02:38:59.515927 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.37s 2026-03-24 02:38:59.515935 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.37s 2026-03-24 02:38:59.515944 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.19s 2026-03-24 02:38:59.515952 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.13s 2026-03-24 02:38:59.515961 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.07s 2026-03-24 02:38:59.515970 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.01s 2026-03-24 02:38:59.766358 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-24 02:38:59.766436 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-03-24 02:39:01.795874 | orchestrator | 2026-03-24 02:39:01 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-24 02:39:11.888835 | orchestrator | 2026-03-24 02:39:11 | INFO  | Task f296f7f8-f6ac-4e9d-90a3-d2f3ecb6a24a (wipe-partitions) was prepared for execution. 2026-03-24 02:39:11.888952 | orchestrator | 2026-03-24 02:39:11 | INFO  | It takes a moment until task f296f7f8-f6ac-4e9d-90a3-d2f3ecb6a24a (wipe-partitions) has been started and output is visible here. 2026-03-24 02:39:25.288404 | orchestrator | 2026-03-24 02:39:25.288542 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-24 02:39:25.288562 | orchestrator | 2026-03-24 02:39:25.288574 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-24 02:39:25.288584 | orchestrator | Tuesday 24 March 2026 02:39:15 +0000 (0:00:00.094) 0:00:00.094 ********* 2026-03-24 02:39:25.288590 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:39:25.288598 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:39:25.288625 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:39:25.288631 | orchestrator | 2026-03-24 02:39:25.288637 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-24 02:39:25.288643 | orchestrator | Tuesday 24 March 2026 02:39:16 +0000 (0:00:00.541) 0:00:00.635 ********* 2026-03-24 02:39:25.288649 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:39:25.288655 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:39:25.288660 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:39:25.288666 | orchestrator | 2026-03-24 02:39:25.288672 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-24 02:39:25.288678 | orchestrator | Tuesday 24 March 2026 02:39:16 +0000 (0:00:00.276) 0:00:00.912 ********* 2026-03-24 02:39:25.288684 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:39:25.288690 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:39:25.288696 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:39:25.288701 | orchestrator | 2026-03-24 02:39:25.288707 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-24 02:39:25.288713 | orchestrator | Tuesday 24 March 2026 02:39:16 +0000 (0:00:00.540) 0:00:01.453 ********* 2026-03-24 02:39:25.288718 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:39:25.288724 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:39:25.288729 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:39:25.288736 | orchestrator | 2026-03-24 02:39:25.288742 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-24 02:39:25.288747 | orchestrator | Tuesday 24 March 2026 02:39:17 +0000 (0:00:00.213) 0:00:01.667 ********* 2026-03-24 02:39:25.288753 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-24 02:39:25.288759 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-24 02:39:25.288765 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-24 02:39:25.288771 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-24 02:39:25.288776 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-24 02:39:25.288782 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-24 02:39:25.288787 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-24 02:39:25.288793 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-24 02:39:25.288810 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 
2026-03-24 02:39:25.288816 | orchestrator | 2026-03-24 02:39:25.288822 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-24 02:39:25.288828 | orchestrator | Tuesday 24 March 2026 02:39:19 +0000 (0:00:02.045) 0:00:03.712 ********* 2026-03-24 02:39:25.288833 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-24 02:39:25.288839 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-24 02:39:25.288845 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-24 02:39:25.288850 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-24 02:39:25.288856 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-24 02:39:25.288862 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-03-24 02:39:25.288867 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-24 02:39:25.288873 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-24 02:39:25.288878 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-24 02:39:25.288884 | orchestrator | 2026-03-24 02:39:25.288889 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-24 02:39:25.288895 | orchestrator | Tuesday 24 March 2026 02:39:20 +0000 (0:00:01.465) 0:00:05.178 ********* 2026-03-24 02:39:25.288901 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-24 02:39:25.288907 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-24 02:39:25.288912 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-24 02:39:25.288918 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-24 02:39:25.288927 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-24 02:39:25.288936 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-24 02:39:25.288953 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-24 02:39:25.288975 | orchestrator | 
changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-24 02:39:25.288985 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-24 02:39:25.288995 | orchestrator | 2026-03-24 02:39:25.289001 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-24 02:39:25.289006 | orchestrator | Tuesday 24 March 2026 02:39:23 +0000 (0:00:03.118) 0:00:08.297 ********* 2026-03-24 02:39:25.289012 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:39:25.289018 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:39:25.289023 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:39:25.289029 | orchestrator | 2026-03-24 02:39:25.289035 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-03-24 02:39:25.289041 | orchestrator | Tuesday 24 March 2026 02:39:24 +0000 (0:00:00.624) 0:00:08.922 ********* 2026-03-24 02:39:25.289046 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:39:25.289052 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:39:25.289058 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:39:25.289063 | orchestrator | 2026-03-24 02:39:25.289069 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:39:25.289076 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:39:25.289083 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:39:25.289105 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:39:25.289114 | orchestrator | 2026-03-24 02:39:25.289124 | orchestrator | 2026-03-24 02:39:25.289134 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:39:25.289144 | orchestrator | Tuesday 24 March 2026 02:39:24 +0000 (0:00:00.656) 
0:00:09.578 ********* 2026-03-24 02:39:25.289151 | orchestrator | =============================================================================== 2026-03-24 02:39:25.289157 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.12s 2026-03-24 02:39:25.289165 | orchestrator | Check device availability ----------------------------------------------- 2.05s 2026-03-24 02:39:25.289174 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.47s 2026-03-24 02:39:25.289183 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s 2026-03-24 02:39:25.289204 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2026-03-24 02:39:25.289214 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.54s 2026-03-24 02:39:25.289224 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.54s 2026-03-24 02:39:25.289230 | orchestrator | Remove all rook related logical devices --------------------------------- 0.28s 2026-03-24 02:39:25.289236 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s 2026-03-24 02:39:37.541911 | orchestrator | 2026-03-24 02:39:37 | INFO  | Task bad82a13-ef3f-480b-b39e-02da1524ac04 (facts) was prepared for execution. 2026-03-24 02:39:37.541999 | orchestrator | 2026-03-24 02:39:37 | INFO  | It takes a moment until task bad82a13-ef3f-480b-b39e-02da1524ac04 (facts) has been started and output is visible here. 
2026-03-24 02:39:50.799924 | orchestrator | 2026-03-24 02:39:50.800016 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-24 02:39:50.800028 | orchestrator | 2026-03-24 02:39:50.800036 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-24 02:39:50.800044 | orchestrator | Tuesday 24 March 2026 02:39:41 +0000 (0:00:00.259) 0:00:00.259 ********* 2026-03-24 02:39:50.800052 | orchestrator | ok: [testbed-manager] 2026-03-24 02:39:50.800085 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:39:50.800093 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:39:50.800100 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:39:50.800108 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:39:50.800115 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:39:50.800122 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:39:50.800129 | orchestrator | 2026-03-24 02:39:50.800137 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-24 02:39:50.800144 | orchestrator | Tuesday 24 March 2026 02:39:42 +0000 (0:00:01.081) 0:00:01.340 ********* 2026-03-24 02:39:50.800152 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:39:50.800160 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:39:50.800168 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:39:50.800175 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:39:50.800182 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:39:50.800189 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:39:50.800196 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:39:50.800203 | orchestrator | 2026-03-24 02:39:50.800211 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-24 02:39:50.800218 | orchestrator | 2026-03-24 02:39:50.800225 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-24 02:39:50.800232 | orchestrator | Tuesday 24 March 2026 02:39:43 +0000 (0:00:01.212) 0:00:02.553 ********* 2026-03-24 02:39:50.800239 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:39:50.800247 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:39:50.800254 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:39:50.800261 | orchestrator | ok: [testbed-manager] 2026-03-24 02:39:50.800268 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:39:50.800275 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:39:50.800282 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:39:50.800290 | orchestrator | 2026-03-24 02:39:50.800297 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-24 02:39:50.800304 | orchestrator | 2026-03-24 02:39:50.800311 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-24 02:39:50.800319 | orchestrator | Tuesday 24 March 2026 02:39:49 +0000 (0:00:05.937) 0:00:08.490 ********* 2026-03-24 02:39:50.800326 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:39:50.800333 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:39:50.800340 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:39:50.800348 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:39:50.800355 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:39:50.800362 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:39:50.800369 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:39:50.800376 | orchestrator | 2026-03-24 02:39:50.800383 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:39:50.800391 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:39:50.800462 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-24 02:39:50.800475 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:39:50.800484 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:39:50.800522 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:39:50.800535 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:39:50.800547 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:39:50.800569 | orchestrator | 2026-03-24 02:39:50.800657 | orchestrator | 2026-03-24 02:39:50.800669 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:39:50.800678 | orchestrator | Tuesday 24 March 2026 02:39:50 +0000 (0:00:00.509) 0:00:08.999 ********* 2026-03-24 02:39:50.800686 | orchestrator | =============================================================================== 2026-03-24 02:39:50.800694 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.94s 2026-03-24 02:39:50.800702 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s 2026-03-24 02:39:50.800710 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2026-03-24 02:39:50.800719 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s 2026-03-24 02:39:52.971696 | orchestrator | 2026-03-24 02:39:52 | INFO  | Task ac8e1120-53ef-4f58-b592-d8664f9506a6 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-03-24 02:39:52.971785 | orchestrator | 2026-03-24 02:39:52 | INFO  | It takes a moment until task ac8e1120-53ef-4f58-b592-d8664f9506a6 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-24 02:40:03.393628 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-24 02:40:03.393770 | orchestrator | 2.16.14 2026-03-24 02:40:03.393790 | orchestrator | 2026-03-24 02:40:03.393818 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-24 02:40:03.393842 | orchestrator | 2026-03-24 02:40:03.393866 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-24 02:40:03.393880 | orchestrator | Tuesday 24 March 2026 02:39:56 +0000 (0:00:00.302) 0:00:00.302 ********* 2026-03-24 02:40:03.393893 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-24 02:40:03.393905 | orchestrator | 2026-03-24 02:40:03.393917 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-24 02:40:03.393945 | orchestrator | Tuesday 24 March 2026 02:39:57 +0000 (0:00:00.230) 0:00:00.533 ********* 2026-03-24 02:40:03.393958 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:40:03.393970 | orchestrator | 2026-03-24 02:40:03.393984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:03.393998 | orchestrator | Tuesday 24 March 2026 02:39:57 +0000 (0:00:00.210) 0:00:00.743 ********* 2026-03-24 02:40:03.394010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-24 02:40:03.394067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-24 02:40:03.394080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-24 02:40:03.394092 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-24 02:40:03.394103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-24 02:40:03.394115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-24 02:40:03.394126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-24 02:40:03.394139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-24 02:40:03.394149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-24 02:40:03.394160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-24 02:40:03.394172 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-24 02:40:03.394184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-24 02:40:03.394196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-24 02:40:03.394236 | orchestrator |
2026-03-24 02:40:03.394251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394261 | orchestrator | Tuesday 24 March 2026 02:39:57 +0000 (0:00:00.400) 0:00:01.143 *********
2026-03-24 02:40:03.394272 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.394283 | orchestrator |
2026-03-24 02:40:03.394295 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394306 | orchestrator | Tuesday 24 March 2026 02:39:58 +0000 (0:00:00.179) 0:00:01.323 *********
2026-03-24 02:40:03.394316 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.394327 | orchestrator |
2026-03-24 02:40:03.394339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394351 | orchestrator | Tuesday 24 March 2026 02:39:58 +0000 (0:00:00.198) 0:00:01.521 *********
2026-03-24 02:40:03.394363 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.394375 | orchestrator |
2026-03-24 02:40:03.394387 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394400 | orchestrator | Tuesday 24 March 2026 02:39:58 +0000 (0:00:00.184) 0:00:01.706 *********
2026-03-24 02:40:03.394411 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.394422 | orchestrator |
2026-03-24 02:40:03.394434 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394448 | orchestrator | Tuesday 24 March 2026 02:39:58 +0000 (0:00:00.177) 0:00:01.884 *********
2026-03-24 02:40:03.394459 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.394471 | orchestrator |
2026-03-24 02:40:03.394483 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394494 | orchestrator | Tuesday 24 March 2026 02:39:58 +0000 (0:00:00.187) 0:00:02.071 *********
2026-03-24 02:40:03.394531 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.394543 | orchestrator |
2026-03-24 02:40:03.394555 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394566 | orchestrator | Tuesday 24 March 2026 02:39:58 +0000 (0:00:00.190) 0:00:02.262 *********
2026-03-24 02:40:03.394577 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.394587 | orchestrator |
2026-03-24 02:40:03.394598 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394609 | orchestrator | Tuesday 24 March 2026 02:39:59 +0000 (0:00:00.199) 0:00:02.461 *********
2026-03-24 02:40:03.394620 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.394632 | orchestrator |
2026-03-24 02:40:03.394657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394668 | orchestrator | Tuesday 24 March 2026 02:39:59 +0000 (0:00:00.176) 0:00:02.638 *********
2026-03-24 02:40:03.394679 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8)
2026-03-24 02:40:03.394693 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8)
2026-03-24 02:40:03.394705 | orchestrator |
2026-03-24 02:40:03.394718 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394753 | orchestrator | Tuesday 24 March 2026 02:39:59 +0000 (0:00:00.370) 0:00:03.009 *********
2026-03-24 02:40:03.394767 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d)
2026-03-24 02:40:03.394778 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d)
2026-03-24 02:40:03.394789 | orchestrator |
2026-03-24 02:40:03.394800 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394810 | orchestrator | Tuesday 24 March 2026 02:40:00 +0000 (0:00:00.521) 0:00:03.531 *********
2026-03-24 02:40:03.394821 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b)
2026-03-24 02:40:03.394843 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b)
2026-03-24 02:40:03.394866 | orchestrator |
2026-03-24 02:40:03.394876 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394887 | orchestrator | Tuesday 24 March 2026 02:40:00 +0000 (0:00:00.540) 0:00:04.071 *********
2026-03-24 02:40:03.394899 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f)
2026-03-24 02:40:03.394910 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f)
2026-03-24 02:40:03.394922 | orchestrator |
2026-03-24 02:40:03.394934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:03.394946 | orchestrator | Tuesday 24 March 2026 02:40:01 +0000 (0:00:00.676) 0:00:04.748 *********
2026-03-24 02:40:03.394957 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-24 02:40:03.394968 | orchestrator |
2026-03-24 02:40:03.394979 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:03.394991 | orchestrator | Tuesday 24 March 2026 02:40:01 +0000 (0:00:00.288) 0:00:05.037 *********
2026-03-24 02:40:03.395002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-24 02:40:03.395014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-24 02:40:03.395026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-24 02:40:03.395037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-24 02:40:03.395048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-24 02:40:03.395060 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-24 02:40:03.395072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-24 02:40:03.395084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-24 02:40:03.395096 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-24 02:40:03.395108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-24 02:40:03.395119 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-24 02:40:03.395131 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-24 02:40:03.395143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-24 02:40:03.395156 | orchestrator |
2026-03-24 02:40:03.395168 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:03.395181 | orchestrator | Tuesday 24 March 2026 02:40:02 +0000 (0:00:00.347) 0:00:05.384 *********
2026-03-24 02:40:03.395192 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.395204 | orchestrator |
2026-03-24 02:40:03.395215 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:03.395227 | orchestrator | Tuesday 24 March 2026 02:40:02 +0000 (0:00:00.189) 0:00:05.574 *********
2026-03-24 02:40:03.395239 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.395250 | orchestrator |
2026-03-24 02:40:03.395262 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:03.395274 | orchestrator | Tuesday 24 March 2026 02:40:02 +0000 (0:00:00.182) 0:00:05.756 *********
2026-03-24 02:40:03.395287 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.395298 | orchestrator |
2026-03-24 02:40:03.395308 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:03.395319 | orchestrator | Tuesday 24 March 2026 02:40:02 +0000 (0:00:00.198) 0:00:05.955 *********
2026-03-24 02:40:03.395330 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.395341 | orchestrator |
2026-03-24 02:40:03.395353 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:03.395376 | orchestrator | Tuesday 24 March 2026 02:40:02 +0000 (0:00:00.176) 0:00:06.131 *********
2026-03-24 02:40:03.395388 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.395400 | orchestrator |
2026-03-24 02:40:03.395412 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:03.395424 | orchestrator | Tuesday 24 March 2026 02:40:03 +0000 (0:00:00.181) 0:00:06.313 *********
2026-03-24 02:40:03.395434 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.395445 | orchestrator |
2026-03-24 02:40:03.395456 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:03.395467 | orchestrator | Tuesday 24 March 2026 02:40:03 +0000 (0:00:00.186) 0:00:06.500 *********
2026-03-24 02:40:03.395478 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:03.395489 | orchestrator |
2026-03-24 02:40:03.395553 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:09.951298 | orchestrator | Tuesday 24 March 2026 02:40:03 +0000 (0:00:00.187) 0:00:06.687 *********
2026-03-24 02:40:09.951400 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951412 | orchestrator |
2026-03-24 02:40:09.951421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:09.951428 | orchestrator | Tuesday 24 March 2026 02:40:03 +0000 (0:00:00.183) 0:00:06.871 *********
2026-03-24 02:40:09.951436 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-24 02:40:09.951441 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-24 02:40:09.951445 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-24 02:40:09.951452 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-24 02:40:09.951459 | orchestrator |
2026-03-24 02:40:09.951479 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:09.951486 | orchestrator | Tuesday 24 March 2026 02:40:04 +0000 (0:00:00.830) 0:00:07.701 *********
2026-03-24 02:40:09.951493 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951499 | orchestrator |
2026-03-24 02:40:09.951531 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:09.951538 | orchestrator | Tuesday 24 March 2026 02:40:04 +0000 (0:00:00.186) 0:00:07.888 *********
2026-03-24 02:40:09.951544 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951551 | orchestrator |
2026-03-24 02:40:09.951557 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:09.951564 | orchestrator | Tuesday 24 March 2026 02:40:04 +0000 (0:00:00.188) 0:00:08.076 *********
2026-03-24 02:40:09.951571 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951578 | orchestrator |
2026-03-24 02:40:09.951584 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:09.951591 | orchestrator | Tuesday 24 March 2026 02:40:04 +0000 (0:00:00.207) 0:00:08.284 *********
2026-03-24 02:40:09.951598 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951604 | orchestrator |
2026-03-24 02:40:09.951611 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-24 02:40:09.951617 | orchestrator | Tuesday 24 March 2026 02:40:05 +0000 (0:00:00.183) 0:00:08.468 *********
2026-03-24 02:40:09.951624 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-24 02:40:09.951631 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-24 02:40:09.951637 | orchestrator |
2026-03-24 02:40:09.951644 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-24 02:40:09.951650 | orchestrator | Tuesday 24 March 2026 02:40:05 +0000 (0:00:00.150) 0:00:08.619 *********
2026-03-24 02:40:09.951657 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951664 | orchestrator |
2026-03-24 02:40:09.951671 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-24 02:40:09.951678 | orchestrator | Tuesday 24 March 2026 02:40:05 +0000 (0:00:00.128) 0:00:08.748 *********
2026-03-24 02:40:09.951684 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951710 | orchestrator |
2026-03-24 02:40:09.951717 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-24 02:40:09.951723 | orchestrator | Tuesday 24 March 2026 02:40:05 +0000 (0:00:00.130) 0:00:08.878 *********
2026-03-24 02:40:09.951729 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951736 | orchestrator |
2026-03-24 02:40:09.951742 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-24 02:40:09.951749 | orchestrator | Tuesday 24 March 2026 02:40:05 +0000 (0:00:00.129) 0:00:09.008 *********
2026-03-24 02:40:09.951756 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:40:09.951763 | orchestrator |
2026-03-24 02:40:09.951770 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-24 02:40:09.951776 | orchestrator | Tuesday 24 March 2026 02:40:05 +0000 (0:00:00.130) 0:00:09.138 *********
2026-03-24 02:40:09.951783 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d21def1-f46f-5673-adc8-800ee07d688b'}})
2026-03-24 02:40:09.951790 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7857bb6-ee47-5754-bddf-a4c3c3300a80'}})
2026-03-24 02:40:09.951797 | orchestrator |
2026-03-24 02:40:09.951803 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-24 02:40:09.951810 | orchestrator | Tuesday 24 March 2026 02:40:05 +0000 (0:00:00.158) 0:00:09.296 *********
2026-03-24 02:40:09.951817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d21def1-f46f-5673-adc8-800ee07d688b'}})
2026-03-24 02:40:09.951825 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7857bb6-ee47-5754-bddf-a4c3c3300a80'}})
2026-03-24 02:40:09.951832 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951839 | orchestrator |
2026-03-24 02:40:09.951846 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-24 02:40:09.951852 | orchestrator | Tuesday 24 March 2026 02:40:06 +0000 (0:00:00.266) 0:00:09.563 *********
2026-03-24 02:40:09.951859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d21def1-f46f-5673-adc8-800ee07d688b'}})
2026-03-24 02:40:09.951866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7857bb6-ee47-5754-bddf-a4c3c3300a80'}})
2026-03-24 02:40:09.951873 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951880 | orchestrator |
2026-03-24 02:40:09.951887 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-24 02:40:09.951894 | orchestrator | Tuesday 24 March 2026 02:40:06 +0000 (0:00:00.138) 0:00:09.702 *********
2026-03-24 02:40:09.951901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d21def1-f46f-5673-adc8-800ee07d688b'}})
2026-03-24 02:40:09.951922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7857bb6-ee47-5754-bddf-a4c3c3300a80'}})
2026-03-24 02:40:09.951929 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.951936 | orchestrator |
2026-03-24 02:40:09.951943 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-24 02:40:09.951950 | orchestrator | Tuesday 24 March 2026 02:40:06 +0000 (0:00:00.132) 0:00:09.835 *********
2026-03-24 02:40:09.951957 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:40:09.951963 | orchestrator |
2026-03-24 02:40:09.951970 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-24 02:40:09.951976 | orchestrator | Tuesday 24 March 2026 02:40:06 +0000 (0:00:00.115) 0:00:09.950 *********
2026-03-24 02:40:09.951983 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:40:09.951989 | orchestrator |
2026-03-24 02:40:09.952000 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-24 02:40:09.952008 | orchestrator | Tuesday 24 March 2026 02:40:06 +0000 (0:00:00.115) 0:00:10.065 *********
2026-03-24 02:40:09.952014 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.952021 | orchestrator |
2026-03-24 02:40:09.952028 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-24 02:40:09.952041 | orchestrator | Tuesday 24 March 2026 02:40:06 +0000 (0:00:00.109) 0:00:10.175 *********
2026-03-24 02:40:09.952048 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.952054 | orchestrator |
2026-03-24 02:40:09.952061 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-24 02:40:09.952067 | orchestrator | Tuesday 24 March 2026 02:40:06 +0000 (0:00:00.124) 0:00:10.299 *********
2026-03-24 02:40:09.952074 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.952080 | orchestrator |
2026-03-24 02:40:09.952086 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-24 02:40:09.952093 | orchestrator | Tuesday 24 March 2026 02:40:07 +0000 (0:00:00.127) 0:00:10.426 *********
2026-03-24 02:40:09.952100 | orchestrator | ok: [testbed-node-3] => {
2026-03-24 02:40:09.952106 | orchestrator |     "ceph_osd_devices": {
2026-03-24 02:40:09.952113 | orchestrator |         "sdb": {
2026-03-24 02:40:09.952120 | orchestrator |             "osd_lvm_uuid": "4d21def1-f46f-5673-adc8-800ee07d688b"
2026-03-24 02:40:09.952127 | orchestrator |         },
2026-03-24 02:40:09.952133 | orchestrator |         "sdc": {
2026-03-24 02:40:09.952140 | orchestrator |             "osd_lvm_uuid": "d7857bb6-ee47-5754-bddf-a4c3c3300a80"
2026-03-24 02:40:09.952146 | orchestrator |         }
2026-03-24 02:40:09.952153 | orchestrator |     }
2026-03-24 02:40:09.952159 | orchestrator | }
2026-03-24 02:40:09.952166 | orchestrator |
2026-03-24 02:40:09.952172 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-24 02:40:09.952179 | orchestrator | Tuesday 24 March 2026 02:40:07 +0000 (0:00:00.115) 0:00:10.542 *********
2026-03-24 02:40:09.952186 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.952192 | orchestrator |
2026-03-24 02:40:09.952199 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-24 02:40:09.952206 | orchestrator | Tuesday 24 March 2026 02:40:07 +0000 (0:00:00.103) 0:00:10.646 *********
2026-03-24 02:40:09.952212 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.952218 | orchestrator |
2026-03-24 02:40:09.952225 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-24 02:40:09.952232 | orchestrator | Tuesday 24 March 2026 02:40:07 +0000 (0:00:00.121) 0:00:10.767 *********
2026-03-24 02:40:09.952238 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:40:09.952244 | orchestrator |
2026-03-24 02:40:09.952251 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-24 02:40:09.952257 | orchestrator | Tuesday 24 March 2026 02:40:07 +0000 (0:00:00.120) 0:00:10.887 *********
2026-03-24 02:40:09.952264 | orchestrator | changed: [testbed-node-3] => {
2026-03-24 02:40:09.952271 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-24 02:40:09.952278 | orchestrator |         "ceph_osd_devices": {
2026-03-24 02:40:09.952284 | orchestrator |             "sdb": {
2026-03-24 02:40:09.952291 | orchestrator |                 "osd_lvm_uuid": "4d21def1-f46f-5673-adc8-800ee07d688b"
2026-03-24 02:40:09.952298 | orchestrator |             },
2026-03-24 02:40:09.952304 | orchestrator |             "sdc": {
2026-03-24 02:40:09.952311 | orchestrator |                 "osd_lvm_uuid": "d7857bb6-ee47-5754-bddf-a4c3c3300a80"
2026-03-24 02:40:09.952317 | orchestrator |             }
2026-03-24 02:40:09.952324 | orchestrator |         },
2026-03-24 02:40:09.952330 | orchestrator |         "lvm_volumes": [
2026-03-24 02:40:09.952336 | orchestrator |             {
2026-03-24 02:40:09.952343 | orchestrator |                 "data": "osd-block-4d21def1-f46f-5673-adc8-800ee07d688b",
2026-03-24 02:40:09.952350 | orchestrator |                 "data_vg": "ceph-4d21def1-f46f-5673-adc8-800ee07d688b"
2026-03-24 02:40:09.952357 | orchestrator |             },
2026-03-24 02:40:09.952364 | orchestrator |             {
2026-03-24 02:40:09.952370 | orchestrator |                 "data": "osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80",
2026-03-24 02:40:09.952377 | orchestrator |                 "data_vg": "ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80"
2026-03-24 02:40:09.952384 | orchestrator |             }
2026-03-24 02:40:09.952395 | orchestrator |         ]
2026-03-24 02:40:09.952401 | orchestrator |     }
2026-03-24 02:40:09.952408 | orchestrator | }
2026-03-24 02:40:09.952414 | orchestrator |
2026-03-24 02:40:09.952421 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-24 02:40:09.952428 | orchestrator | Tuesday 24 March 2026 02:40:07 +0000 (0:00:00.299) 0:00:11.186 *********
2026-03-24 02:40:09.952435 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-24 02:40:09.952442 | orchestrator |
2026-03-24 02:40:09.952449 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-24 02:40:09.952456 | orchestrator |
2026-03-24 02:40:09.952462 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-24 02:40:09.952469 | orchestrator | Tuesday 24 March 2026 02:40:09 +0000 (0:00:01.609) 0:00:12.796 *********
2026-03-24 02:40:09.952475 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-24 02:40:09.952482 | orchestrator |
2026-03-24 02:40:09.952488 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-24 02:40:09.952495 | orchestrator | Tuesday 24 March 2026 02:40:09 +0000 (0:00:00.234) 0:00:13.031 *********
2026-03-24 02:40:09.952502 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:40:09.952540 | orchestrator |
2026-03-24 02:40:09.952551 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.558994 | orchestrator | Tuesday 24 March 2026 02:40:09 +0000 (0:00:00.219) 0:00:13.250 *********
2026-03-24 02:40:17.559084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-24 02:40:17.559093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-24 02:40:17.559100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-24 02:40:17.559107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-24 02:40:17.559128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-24 02:40:17.559136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-24 02:40:17.559142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-24 02:40:17.559149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-24 02:40:17.559155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-24 02:40:17.559161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-24 02:40:17.559167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-24 02:40:17.559173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-24 02:40:17.559179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-24 02:40:17.559185 | orchestrator |
2026-03-24 02:40:17.559192 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559198 | orchestrator | Tuesday 24 March 2026 02:40:10 +0000 (0:00:00.350) 0:00:13.601 *********
2026-03-24 02:40:17.559204 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559211 | orchestrator |
2026-03-24 02:40:17.559217 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559224 | orchestrator | Tuesday 24 March 2026 02:40:10 +0000 (0:00:00.181) 0:00:13.782 *********
2026-03-24 02:40:17.559230 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559236 | orchestrator |
2026-03-24 02:40:17.559242 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559248 | orchestrator | Tuesday 24 March 2026 02:40:10 +0000 (0:00:00.185) 0:00:13.968 *********
2026-03-24 02:40:17.559254 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559282 | orchestrator |
2026-03-24 02:40:17.559289 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559294 | orchestrator | Tuesday 24 March 2026 02:40:10 +0000 (0:00:00.182) 0:00:14.151 *********
2026-03-24 02:40:17.559300 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559306 | orchestrator |
2026-03-24 02:40:17.559311 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559316 | orchestrator | Tuesday 24 March 2026 02:40:11 +0000 (0:00:00.434) 0:00:14.586 *********
2026-03-24 02:40:17.559322 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559328 | orchestrator |
2026-03-24 02:40:17.559334 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559341 | orchestrator | Tuesday 24 March 2026 02:40:11 +0000 (0:00:00.181) 0:00:14.767 *********
2026-03-24 02:40:17.559347 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559353 | orchestrator |
2026-03-24 02:40:17.559358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559364 | orchestrator | Tuesday 24 March 2026 02:40:11 +0000 (0:00:00.184) 0:00:14.951 *********
2026-03-24 02:40:17.559370 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559375 | orchestrator |
2026-03-24 02:40:17.559381 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559387 | orchestrator | Tuesday 24 March 2026 02:40:11 +0000 (0:00:00.180) 0:00:15.132 *********
2026-03-24 02:40:17.559393 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559398 | orchestrator |
2026-03-24 02:40:17.559404 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559410 | orchestrator | Tuesday 24 March 2026 02:40:12 +0000 (0:00:00.184) 0:00:15.316 *********
2026-03-24 02:40:17.559427 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba)
2026-03-24 02:40:17.559434 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba)
2026-03-24 02:40:17.559440 | orchestrator |
2026-03-24 02:40:17.559445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559451 | orchestrator | Tuesday 24 March 2026 02:40:12 +0000 (0:00:00.411) 0:00:15.728 *********
2026-03-24 02:40:17.559458 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710)
2026-03-24 02:40:17.559464 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710)
2026-03-24 02:40:17.559470 | orchestrator |
2026-03-24 02:40:17.559476 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559482 | orchestrator | Tuesday 24 March 2026 02:40:12 +0000 (0:00:00.403) 0:00:16.131 *********
2026-03-24 02:40:17.559488 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c)
2026-03-24 02:40:17.559494 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c)
2026-03-24 02:40:17.559500 | orchestrator |
2026-03-24 02:40:17.559506 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559611 | orchestrator | Tuesday 24 March 2026 02:40:13 +0000 (0:00:00.392) 0:00:16.524 *********
2026-03-24 02:40:17.559619 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b)
2026-03-24 02:40:17.559626 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b)
2026-03-24 02:40:17.559632 | orchestrator |
2026-03-24 02:40:17.559639 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-24 02:40:17.559646 | orchestrator | Tuesday 24 March 2026 02:40:13 +0000 (0:00:00.539) 0:00:17.063 *********
2026-03-24 02:40:17.559659 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-24 02:40:17.559666 | orchestrator |
2026-03-24 02:40:17.559672 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559688 | orchestrator | Tuesday 24 March 2026 02:40:14 +0000 (0:00:00.503) 0:00:17.567 *********
2026-03-24 02:40:17.559695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-24 02:40:17.559701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-24 02:40:17.559707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-24 02:40:17.559714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-24 02:40:17.559720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-24 02:40:17.559727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-24 02:40:17.559733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-24 02:40:17.559739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-24 02:40:17.559746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-24 02:40:17.559752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-24 02:40:17.559759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-24 02:40:17.559765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-24 02:40:17.559772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-24 02:40:17.559779 | orchestrator |
2026-03-24 02:40:17.559786 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559790 | orchestrator | Tuesday 24 March 2026 02:40:14 +0000 (0:00:00.621) 0:00:18.188 *********
2026-03-24 02:40:17.559794 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559799 | orchestrator |
2026-03-24 02:40:17.559805 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559811 | orchestrator | Tuesday 24 March 2026 02:40:15 +0000 (0:00:00.190) 0:00:18.378 *********
2026-03-24 02:40:17.559817 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559823 | orchestrator |
2026-03-24 02:40:17.559829 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559836 | orchestrator | Tuesday 24 March 2026 02:40:15 +0000 (0:00:00.189) 0:00:18.568 *********
2026-03-24 02:40:17.559842 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559848 | orchestrator |
2026-03-24 02:40:17.559855 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559861 | orchestrator | Tuesday 24 March 2026 02:40:15 +0000 (0:00:00.175) 0:00:18.743 *********
2026-03-24 02:40:17.559868 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559874 | orchestrator |
2026-03-24 02:40:17.559880 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559886 | orchestrator | Tuesday 24 March 2026 02:40:15 +0000 (0:00:00.192) 0:00:18.936 *********
2026-03-24 02:40:17.559892 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559898 | orchestrator |
2026-03-24 02:40:17.559904 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559910 | orchestrator | Tuesday 24 March 2026 02:40:15 +0000 (0:00:00.194) 0:00:19.130 *********
2026-03-24 02:40:17.559916 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559922 | orchestrator |
2026-03-24 02:40:17.559928 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559934 | orchestrator | Tuesday 24 March 2026 02:40:16 +0000 (0:00:00.180) 0:00:19.311 *********
2026-03-24 02:40:17.559941 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559947 | orchestrator |
2026-03-24 02:40:17.559953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559965 | orchestrator | Tuesday 24 March 2026 02:40:16 +0000 (0:00:00.181) 0:00:19.492 *********
2026-03-24 02:40:17.559971 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:17.559977 | orchestrator |
2026-03-24 02:40:17.559982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.559988 | orchestrator | Tuesday 24 March 2026 02:40:16 +0000 (0:00:00.177) 0:00:19.670 *********
2026-03-24 02:40:17.559993 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-24 02:40:17.559999 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-24 02:40:17.560005 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-24 02:40:17.560012 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-24 02:40:17.560017 | orchestrator |
2026-03-24 02:40:17.560024 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:17.560030 | orchestrator | Tuesday 24 March 2026 02:40:17 +0000 (0:00:00.727) 0:00:20.397 *********
2026-03-24 02:40:17.560035 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:22.627773 | orchestrator |
2026-03-24 02:40:22.627879 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:22.627894 | orchestrator | Tuesday 24 March 2026 02:40:17 +0000 (0:00:00.460) 0:00:20.858 *********
2026-03-24 02:40:22.627905 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:22.627916 | orchestrator |
2026-03-24 02:40:22.627926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:22.627936 | orchestrator | Tuesday 24 March 2026 02:40:17 +0000 (0:00:00.203) 0:00:21.061 *********
2026-03-24 02:40:22.627946 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:22.627963 | orchestrator |
2026-03-24 02:40:22.627979 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-24 02:40:22.628014 | orchestrator | Tuesday 24 March 2026 02:40:17 +0000 (0:00:00.202) 0:00:21.263 *********
2026-03-24 02:40:22.628032 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:22.628047 | orchestrator |
2026-03-24 02:40:22.628063 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-24 02:40:22.628078 | orchestrator | Tuesday 24 March 2026 02:40:18 +0000 (0:00:00.183) 0:00:21.446 *********
2026-03-24 02:40:22.628094 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-24 02:40:22.628110 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-24 02:40:22.628127 | orchestrator |
2026-03-24 02:40:22.628143 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-24 02:40:22.628158 | orchestrator | Tuesday 24 March 2026 02:40:18 +0000 (0:00:00.144) 0:00:21.591 *********
2026-03-24 02:40:22.628173 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:22.628188 | orchestrator |
2026-03-24 02:40:22.628203 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-24 02:40:22.628219 | orchestrator | Tuesday 24 March 2026 02:40:18 +0000 (0:00:00.110) 0:00:21.701 *********
2026-03-24 02:40:22.628235 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:22.628252 | orchestrator |
2026-03-24 02:40:22.628268 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-24 02:40:22.628285 | orchestrator | Tuesday 24 March 2026 02:40:18 +0000 (0:00:00.109) 0:00:21.811 *********
2026-03-24 02:40:22.628297 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:40:22.628308 | orchestrator |
2026-03-24 02:40:22.628323 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-24 02:40:22.628341 | orchestrator | Tuesday 24 March 2026 02:40:18 +0000 (0:00:00.105) 0:00:21.917 *********
2026-03-24 02:40:22.628357 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:40:22.628373 | orchestrator |
2026-03-24 02:40:22.628389 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-24 02:40:22.628407 | orchestrator | Tuesday 24 March 2026 02:40:18 +0000 (0:00:00.129) 0:00:22.046 *********
2026-03-24 02:40:22.628425 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d735645-9e18-5d04-8028-1696940918c0'}})
2026-03-24 02:40:22.628470 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a329e066-8536-5438-99e1-d9cc3f91f537'}})
2026-03-24 02:40:22.628488 | orchestrator |
2026-03-24 02:40:22.628505 | orchestrator | TASK
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-24 02:40:22.628547 | orchestrator | Tuesday 24 March 2026 02:40:18 +0000 (0:00:00.155) 0:00:22.202 ********* 2026-03-24 02:40:22.628566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d735645-9e18-5d04-8028-1696940918c0'}})  2026-03-24 02:40:22.628586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a329e066-8536-5438-99e1-d9cc3f91f537'}})  2026-03-24 02:40:22.628603 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:40:22.628619 | orchestrator | 2026-03-24 02:40:22.628636 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-24 02:40:22.628653 | orchestrator | Tuesday 24 March 2026 02:40:19 +0000 (0:00:00.133) 0:00:22.335 ********* 2026-03-24 02:40:22.628669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d735645-9e18-5d04-8028-1696940918c0'}})  2026-03-24 02:40:22.628687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a329e066-8536-5438-99e1-d9cc3f91f537'}})  2026-03-24 02:40:22.628699 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:40:22.628709 | orchestrator | 2026-03-24 02:40:22.628719 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-24 02:40:22.628728 | orchestrator | Tuesday 24 March 2026 02:40:19 +0000 (0:00:00.277) 0:00:22.613 ********* 2026-03-24 02:40:22.628738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d735645-9e18-5d04-8028-1696940918c0'}})  2026-03-24 02:40:22.628748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a329e066-8536-5438-99e1-d9cc3f91f537'}})  2026-03-24 02:40:22.628758 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:40:22.628767 | 
orchestrator | 2026-03-24 02:40:22.628777 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-24 02:40:22.628787 | orchestrator | Tuesday 24 March 2026 02:40:19 +0000 (0:00:00.139) 0:00:22.753 ********* 2026-03-24 02:40:22.628797 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:40:22.628806 | orchestrator | 2026-03-24 02:40:22.628816 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-24 02:40:22.628825 | orchestrator | Tuesday 24 March 2026 02:40:19 +0000 (0:00:00.119) 0:00:22.873 ********* 2026-03-24 02:40:22.628835 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:40:22.628844 | orchestrator | 2026-03-24 02:40:22.628854 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-24 02:40:22.628864 | orchestrator | Tuesday 24 March 2026 02:40:19 +0000 (0:00:00.121) 0:00:22.994 ********* 2026-03-24 02:40:22.628895 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:40:22.628905 | orchestrator | 2026-03-24 02:40:22.628915 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-24 02:40:22.628925 | orchestrator | Tuesday 24 March 2026 02:40:19 +0000 (0:00:00.117) 0:00:23.111 ********* 2026-03-24 02:40:22.628934 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:40:22.628944 | orchestrator | 2026-03-24 02:40:22.628953 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-24 02:40:22.628963 | orchestrator | Tuesday 24 March 2026 02:40:19 +0000 (0:00:00.126) 0:00:23.238 ********* 2026-03-24 02:40:22.628973 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:40:22.628982 | orchestrator | 2026-03-24 02:40:22.628992 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-24 02:40:22.629011 | orchestrator | Tuesday 24 March 2026 02:40:20 +0000 
(0:00:00.116) 0:00:23.354 ********* 2026-03-24 02:40:22.629021 | orchestrator | ok: [testbed-node-4] => { 2026-03-24 02:40:22.629031 | orchestrator |  "ceph_osd_devices": { 2026-03-24 02:40:22.629050 | orchestrator |  "sdb": { 2026-03-24 02:40:22.629061 | orchestrator |  "osd_lvm_uuid": "4d735645-9e18-5d04-8028-1696940918c0" 2026-03-24 02:40:22.629070 | orchestrator |  }, 2026-03-24 02:40:22.629080 | orchestrator |  "sdc": { 2026-03-24 02:40:22.629090 | orchestrator |  "osd_lvm_uuid": "a329e066-8536-5438-99e1-d9cc3f91f537" 2026-03-24 02:40:22.629099 | orchestrator |  } 2026-03-24 02:40:22.629109 | orchestrator |  } 2026-03-24 02:40:22.629119 | orchestrator | } 2026-03-24 02:40:22.629129 | orchestrator | 2026-03-24 02:40:22.629138 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-24 02:40:22.629148 | orchestrator | Tuesday 24 March 2026 02:40:20 +0000 (0:00:00.136) 0:00:23.491 ********* 2026-03-24 02:40:22.629158 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:40:22.629168 | orchestrator | 2026-03-24 02:40:22.629177 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-24 02:40:22.629187 | orchestrator | Tuesday 24 March 2026 02:40:20 +0000 (0:00:00.127) 0:00:23.619 ********* 2026-03-24 02:40:22.629196 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:40:22.629206 | orchestrator | 2026-03-24 02:40:22.629216 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-24 02:40:22.629225 | orchestrator | Tuesday 24 March 2026 02:40:20 +0000 (0:00:00.110) 0:00:23.729 ********* 2026-03-24 02:40:22.629235 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:40:22.629244 | orchestrator | 2026-03-24 02:40:22.629254 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-24 02:40:22.629264 | orchestrator | Tuesday 24 March 2026 02:40:20 +0000 
(0:00:00.125) 0:00:23.855 ********* 2026-03-24 02:40:22.629273 | orchestrator | changed: [testbed-node-4] => { 2026-03-24 02:40:22.629283 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-24 02:40:22.629293 | orchestrator |  "ceph_osd_devices": { 2026-03-24 02:40:22.629302 | orchestrator |  "sdb": { 2026-03-24 02:40:22.629312 | orchestrator |  "osd_lvm_uuid": "4d735645-9e18-5d04-8028-1696940918c0" 2026-03-24 02:40:22.629322 | orchestrator |  }, 2026-03-24 02:40:22.629332 | orchestrator |  "sdc": { 2026-03-24 02:40:22.629342 | orchestrator |  "osd_lvm_uuid": "a329e066-8536-5438-99e1-d9cc3f91f537" 2026-03-24 02:40:22.629351 | orchestrator |  } 2026-03-24 02:40:22.629361 | orchestrator |  }, 2026-03-24 02:40:22.629371 | orchestrator |  "lvm_volumes": [ 2026-03-24 02:40:22.629380 | orchestrator |  { 2026-03-24 02:40:22.629390 | orchestrator |  "data": "osd-block-4d735645-9e18-5d04-8028-1696940918c0", 2026-03-24 02:40:22.629400 | orchestrator |  "data_vg": "ceph-4d735645-9e18-5d04-8028-1696940918c0" 2026-03-24 02:40:22.629410 | orchestrator |  }, 2026-03-24 02:40:22.629419 | orchestrator |  { 2026-03-24 02:40:22.629429 | orchestrator |  "data": "osd-block-a329e066-8536-5438-99e1-d9cc3f91f537", 2026-03-24 02:40:22.629439 | orchestrator |  "data_vg": "ceph-a329e066-8536-5438-99e1-d9cc3f91f537" 2026-03-24 02:40:22.629449 | orchestrator |  } 2026-03-24 02:40:22.629458 | orchestrator |  ] 2026-03-24 02:40:22.629469 | orchestrator |  } 2026-03-24 02:40:22.629478 | orchestrator | } 2026-03-24 02:40:22.629488 | orchestrator | 2026-03-24 02:40:22.629498 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-24 02:40:22.629507 | orchestrator | Tuesday 24 March 2026 02:40:20 +0000 (0:00:00.313) 0:00:24.169 ********* 2026-03-24 02:40:22.629543 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-24 02:40:22.629553 | orchestrator | 2026-03-24 02:40:22.629563 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-24 02:40:22.629573 | orchestrator | 2026-03-24 02:40:22.629582 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-24 02:40:22.629592 | orchestrator | Tuesday 24 March 2026 02:40:21 +0000 (0:00:00.987) 0:00:25.157 ********* 2026-03-24 02:40:22.629601 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-24 02:40:22.629617 | orchestrator | 2026-03-24 02:40:22.629627 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-24 02:40:22.629636 | orchestrator | Tuesday 24 March 2026 02:40:22 +0000 (0:00:00.232) 0:00:25.389 ********* 2026-03-24 02:40:22.629646 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:40:22.629655 | orchestrator | 2026-03-24 02:40:22.629665 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:22.629675 | orchestrator | Tuesday 24 March 2026 02:40:22 +0000 (0:00:00.207) 0:00:25.596 ********* 2026-03-24 02:40:22.629684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-24 02:40:22.629694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-24 02:40:22.629703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-24 02:40:22.629713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-24 02:40:22.629722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-24 02:40:22.629739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-24 02:40:29.925035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-24 02:40:29.925133 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-24 02:40:29.925145 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-24 02:40:29.925153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-24 02:40:29.925160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-24 02:40:29.925185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-24 02:40:29.925193 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-24 02:40:29.925200 | orchestrator | 2026-03-24 02:40:29.925208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925216 | orchestrator | Tuesday 24 March 2026 02:40:22 +0000 (0:00:00.329) 0:00:25.926 ********* 2026-03-24 02:40:29.925223 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925230 | orchestrator | 2026-03-24 02:40:29.925237 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925244 | orchestrator | Tuesday 24 March 2026 02:40:22 +0000 (0:00:00.181) 0:00:26.108 ********* 2026-03-24 02:40:29.925250 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925257 | orchestrator | 2026-03-24 02:40:29.925263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925269 | orchestrator | Tuesday 24 March 2026 02:40:22 +0000 (0:00:00.184) 0:00:26.293 ********* 2026-03-24 02:40:29.925276 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925283 | orchestrator | 2026-03-24 02:40:29.925291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925297 | 
orchestrator | Tuesday 24 March 2026 02:40:23 +0000 (0:00:00.178) 0:00:26.471 ********* 2026-03-24 02:40:29.925304 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925311 | orchestrator | 2026-03-24 02:40:29.925317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925324 | orchestrator | Tuesday 24 March 2026 02:40:23 +0000 (0:00:00.438) 0:00:26.909 ********* 2026-03-24 02:40:29.925331 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925337 | orchestrator | 2026-03-24 02:40:29.925344 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925351 | orchestrator | Tuesday 24 March 2026 02:40:23 +0000 (0:00:00.201) 0:00:27.111 ********* 2026-03-24 02:40:29.925358 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925384 | orchestrator | 2026-03-24 02:40:29.925391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925398 | orchestrator | Tuesday 24 March 2026 02:40:23 +0000 (0:00:00.188) 0:00:27.299 ********* 2026-03-24 02:40:29.925404 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925412 | orchestrator | 2026-03-24 02:40:29.925418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925425 | orchestrator | Tuesday 24 March 2026 02:40:24 +0000 (0:00:00.187) 0:00:27.487 ********* 2026-03-24 02:40:29.925432 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925439 | orchestrator | 2026-03-24 02:40:29.925446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925452 | orchestrator | Tuesday 24 March 2026 02:40:24 +0000 (0:00:00.191) 0:00:27.678 ********* 2026-03-24 02:40:29.925459 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9) 2026-03-24 02:40:29.925468 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9) 2026-03-24 02:40:29.925475 | orchestrator | 2026-03-24 02:40:29.925481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925488 | orchestrator | Tuesday 24 March 2026 02:40:24 +0000 (0:00:00.372) 0:00:28.051 ********* 2026-03-24 02:40:29.925495 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5) 2026-03-24 02:40:29.925502 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5) 2026-03-24 02:40:29.925509 | orchestrator | 2026-03-24 02:40:29.925516 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925552 | orchestrator | Tuesday 24 March 2026 02:40:25 +0000 (0:00:00.395) 0:00:28.447 ********* 2026-03-24 02:40:29.925559 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e) 2026-03-24 02:40:29.925566 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e) 2026-03-24 02:40:29.925572 | orchestrator | 2026-03-24 02:40:29.925579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:40:29.925586 | orchestrator | Tuesday 24 March 2026 02:40:25 +0000 (0:00:00.380) 0:00:28.827 ********* 2026-03-24 02:40:29.925593 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a) 2026-03-24 02:40:29.925601 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a) 2026-03-24 02:40:29.925609 | orchestrator | 2026-03-24 02:40:29.925616 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-24 02:40:29.925623 | orchestrator | Tuesday 24 March 2026 02:40:25 +0000 (0:00:00.395) 0:00:29.223 ********* 2026-03-24 02:40:29.925630 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-24 02:40:29.925637 | orchestrator | 2026-03-24 02:40:29.925644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.925669 | orchestrator | Tuesday 24 March 2026 02:40:26 +0000 (0:00:00.302) 0:00:29.525 ********* 2026-03-24 02:40:29.925677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-24 02:40:29.925684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-24 02:40:29.925691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-24 02:40:29.925698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-24 02:40:29.925711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-24 02:40:29.925718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-24 02:40:29.925725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-24 02:40:29.925738 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-24 02:40:29.925745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-24 02:40:29.925752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-24 02:40:29.925759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-24 02:40:29.925765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-24 02:40:29.925772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-24 02:40:29.925778 | orchestrator | 2026-03-24 02:40:29.925785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.925791 | orchestrator | Tuesday 24 March 2026 02:40:26 +0000 (0:00:00.456) 0:00:29.982 ********* 2026-03-24 02:40:29.925798 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925804 | orchestrator | 2026-03-24 02:40:29.925811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.925818 | orchestrator | Tuesday 24 March 2026 02:40:26 +0000 (0:00:00.187) 0:00:30.170 ********* 2026-03-24 02:40:29.925825 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925832 | orchestrator | 2026-03-24 02:40:29.925839 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.925846 | orchestrator | Tuesday 24 March 2026 02:40:27 +0000 (0:00:00.182) 0:00:30.353 ********* 2026-03-24 02:40:29.925853 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925859 | orchestrator | 2026-03-24 02:40:29.925866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.925872 | orchestrator | Tuesday 24 March 2026 02:40:27 +0000 (0:00:00.186) 0:00:30.539 ********* 2026-03-24 02:40:29.925879 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925885 | orchestrator | 2026-03-24 02:40:29.925892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.925899 | orchestrator | Tuesday 24 March 2026 02:40:27 +0000 (0:00:00.185) 0:00:30.724 ********* 2026-03-24 02:40:29.925906 
| orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925912 | orchestrator | 2026-03-24 02:40:29.925919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.925926 | orchestrator | Tuesday 24 March 2026 02:40:27 +0000 (0:00:00.184) 0:00:30.909 ********* 2026-03-24 02:40:29.925933 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925940 | orchestrator | 2026-03-24 02:40:29.925947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.925953 | orchestrator | Tuesday 24 March 2026 02:40:27 +0000 (0:00:00.187) 0:00:31.097 ********* 2026-03-24 02:40:29.925960 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925967 | orchestrator | 2026-03-24 02:40:29.925974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.925981 | orchestrator | Tuesday 24 March 2026 02:40:27 +0000 (0:00:00.186) 0:00:31.284 ********* 2026-03-24 02:40:29.925987 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.925994 | orchestrator | 2026-03-24 02:40:29.926000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.926007 | orchestrator | Tuesday 24 March 2026 02:40:28 +0000 (0:00:00.184) 0:00:31.468 ********* 2026-03-24 02:40:29.926014 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-24 02:40:29.926073 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-24 02:40:29.926081 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-24 02:40:29.926087 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-24 02:40:29.926094 | orchestrator | 2026-03-24 02:40:29.926101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.926108 | orchestrator | Tuesday 24 March 2026 02:40:28 +0000 (0:00:00.716) 0:00:32.185 
********* 2026-03-24 02:40:29.926123 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.926130 | orchestrator | 2026-03-24 02:40:29.926137 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.926144 | orchestrator | Tuesday 24 March 2026 02:40:29 +0000 (0:00:00.201) 0:00:32.387 ********* 2026-03-24 02:40:29.926150 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.926157 | orchestrator | 2026-03-24 02:40:29.926164 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.926171 | orchestrator | Tuesday 24 March 2026 02:40:29 +0000 (0:00:00.182) 0:00:32.569 ********* 2026-03-24 02:40:29.926178 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.926184 | orchestrator | 2026-03-24 02:40:29.926191 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:40:29.926198 | orchestrator | Tuesday 24 March 2026 02:40:29 +0000 (0:00:00.472) 0:00:33.042 ********* 2026-03-24 02:40:29.926205 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:29.926212 | orchestrator | 2026-03-24 02:40:29.926227 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-24 02:40:33.442278 | orchestrator | Tuesday 24 March 2026 02:40:29 +0000 (0:00:00.179) 0:00:33.221 ********* 2026-03-24 02:40:33.442371 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-24 02:40:33.442382 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-24 02:40:33.442388 | orchestrator | 2026-03-24 02:40:33.442395 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-24 02:40:33.442401 | orchestrator | Tuesday 24 March 2026 02:40:30 +0000 (0:00:00.154) 0:00:33.376 ********* 2026-03-24 02:40:33.442409 | orchestrator | skipping: 
[testbed-node-5] 2026-03-24 02:40:33.442416 | orchestrator | 2026-03-24 02:40:33.442439 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-24 02:40:33.442446 | orchestrator | Tuesday 24 March 2026 02:40:30 +0000 (0:00:00.117) 0:00:33.493 ********* 2026-03-24 02:40:33.442452 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.442459 | orchestrator | 2026-03-24 02:40:33.442465 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-24 02:40:33.442472 | orchestrator | Tuesday 24 March 2026 02:40:30 +0000 (0:00:00.118) 0:00:33.611 ********* 2026-03-24 02:40:33.442478 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.442484 | orchestrator | 2026-03-24 02:40:33.442490 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-24 02:40:33.442496 | orchestrator | Tuesday 24 March 2026 02:40:30 +0000 (0:00:00.133) 0:00:33.745 ********* 2026-03-24 02:40:33.442502 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:40:33.442510 | orchestrator | 2026-03-24 02:40:33.442517 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-24 02:40:33.442587 | orchestrator | Tuesday 24 March 2026 02:40:30 +0000 (0:00:00.118) 0:00:33.863 ********* 2026-03-24 02:40:33.442595 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dc39596-c9fc-583d-89f8-392d010fb80f'}}) 2026-03-24 02:40:33.442602 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}}) 2026-03-24 02:40:33.442609 | orchestrator | 2026-03-24 02:40:33.442616 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-24 02:40:33.442623 | orchestrator | Tuesday 24 March 2026 02:40:30 +0000 (0:00:00.150) 0:00:34.014 ********* 2026-03-24 02:40:33.442631 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dc39596-c9fc-583d-89f8-392d010fb80f'}})  2026-03-24 02:40:33.442639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}})  2026-03-24 02:40:33.442645 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.442652 | orchestrator | 2026-03-24 02:40:33.442660 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-24 02:40:33.442688 | orchestrator | Tuesday 24 March 2026 02:40:30 +0000 (0:00:00.129) 0:00:34.144 ********* 2026-03-24 02:40:33.442695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dc39596-c9fc-583d-89f8-392d010fb80f'}})  2026-03-24 02:40:33.442702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}})  2026-03-24 02:40:33.442708 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.442715 | orchestrator | 2026-03-24 02:40:33.442721 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-24 02:40:33.442729 | orchestrator | Tuesday 24 March 2026 02:40:30 +0000 (0:00:00.137) 0:00:34.282 ********* 2026-03-24 02:40:33.442736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dc39596-c9fc-583d-89f8-392d010fb80f'}})  2026-03-24 02:40:33.442742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}})  2026-03-24 02:40:33.442749 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.442756 | orchestrator | 2026-03-24 02:40:33.442764 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-24 02:40:33.442771 | orchestrator | Tuesday 24 March 2026 02:40:31 +0000 
(0:00:00.135) 0:00:34.417 ********* 2026-03-24 02:40:33.442778 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:40:33.442785 | orchestrator | 2026-03-24 02:40:33.442792 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-24 02:40:33.442799 | orchestrator | Tuesday 24 March 2026 02:40:31 +0000 (0:00:00.123) 0:00:34.541 ********* 2026-03-24 02:40:33.442807 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:40:33.442814 | orchestrator | 2026-03-24 02:40:33.442820 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-24 02:40:33.442828 | orchestrator | Tuesday 24 March 2026 02:40:31 +0000 (0:00:00.250) 0:00:34.792 ********* 2026-03-24 02:40:33.442836 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.442844 | orchestrator | 2026-03-24 02:40:33.442852 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-24 02:40:33.442860 | orchestrator | Tuesday 24 March 2026 02:40:31 +0000 (0:00:00.128) 0:00:34.921 ********* 2026-03-24 02:40:33.442868 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.442876 | orchestrator | 2026-03-24 02:40:33.442884 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-24 02:40:33.442892 | orchestrator | Tuesday 24 March 2026 02:40:31 +0000 (0:00:00.116) 0:00:35.037 ********* 2026-03-24 02:40:33.442900 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.442908 | orchestrator | 2026-03-24 02:40:33.442916 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-24 02:40:33.442922 | orchestrator | Tuesday 24 March 2026 02:40:31 +0000 (0:00:00.123) 0:00:35.161 ********* 2026-03-24 02:40:33.442930 | orchestrator | ok: [testbed-node-5] => { 2026-03-24 02:40:33.442938 | orchestrator |  "ceph_osd_devices": { 2026-03-24 02:40:33.442946 | orchestrator |  "sdb": { 
2026-03-24 02:40:33.442971 | orchestrator |  "osd_lvm_uuid": "7dc39596-c9fc-583d-89f8-392d010fb80f" 2026-03-24 02:40:33.442980 | orchestrator |  }, 2026-03-24 02:40:33.442987 | orchestrator |  "sdc": { 2026-03-24 02:40:33.442994 | orchestrator |  "osd_lvm_uuid": "7e9350b0-7da1-52b7-a847-2b8ea41c8f59" 2026-03-24 02:40:33.443001 | orchestrator |  } 2026-03-24 02:40:33.443009 | orchestrator |  } 2026-03-24 02:40:33.443016 | orchestrator | } 2026-03-24 02:40:33.443025 | orchestrator | 2026-03-24 02:40:33.443033 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-24 02:40:33.443041 | orchestrator | Tuesday 24 March 2026 02:40:31 +0000 (0:00:00.129) 0:00:35.290 ********* 2026-03-24 02:40:33.443056 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.443062 | orchestrator | 2026-03-24 02:40:33.443069 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-24 02:40:33.443088 | orchestrator | Tuesday 24 March 2026 02:40:32 +0000 (0:00:00.123) 0:00:35.414 ********* 2026-03-24 02:40:33.443094 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.443101 | orchestrator | 2026-03-24 02:40:33.443107 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-24 02:40:33.443113 | orchestrator | Tuesday 24 March 2026 02:40:32 +0000 (0:00:00.124) 0:00:35.538 ********* 2026-03-24 02:40:33.443120 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:40:33.443126 | orchestrator | 2026-03-24 02:40:33.443133 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-24 02:40:33.443138 | orchestrator | Tuesday 24 March 2026 02:40:32 +0000 (0:00:00.121) 0:00:35.659 ********* 2026-03-24 02:40:33.443145 | orchestrator | changed: [testbed-node-5] => { 2026-03-24 02:40:33.443151 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-24 02:40:33.443158 | orchestrator | 
 "ceph_osd_devices": { 2026-03-24 02:40:33.443165 | orchestrator |  "sdb": { 2026-03-24 02:40:33.443172 | orchestrator |  "osd_lvm_uuid": "7dc39596-c9fc-583d-89f8-392d010fb80f" 2026-03-24 02:40:33.443178 | orchestrator |  }, 2026-03-24 02:40:33.443185 | orchestrator |  "sdc": { 2026-03-24 02:40:33.443192 | orchestrator |  "osd_lvm_uuid": "7e9350b0-7da1-52b7-a847-2b8ea41c8f59" 2026-03-24 02:40:33.443198 | orchestrator |  } 2026-03-24 02:40:33.443204 | orchestrator |  }, 2026-03-24 02:40:33.443210 | orchestrator |  "lvm_volumes": [ 2026-03-24 02:40:33.443216 | orchestrator |  { 2026-03-24 02:40:33.443222 | orchestrator |  "data": "osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f", 2026-03-24 02:40:33.443229 | orchestrator |  "data_vg": "ceph-7dc39596-c9fc-583d-89f8-392d010fb80f" 2026-03-24 02:40:33.443235 | orchestrator |  }, 2026-03-24 02:40:33.443241 | orchestrator |  { 2026-03-24 02:40:33.443248 | orchestrator |  "data": "osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59", 2026-03-24 02:40:33.443255 | orchestrator |  "data_vg": "ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59" 2026-03-24 02:40:33.443262 | orchestrator |  } 2026-03-24 02:40:33.443268 | orchestrator |  ] 2026-03-24 02:40:33.443275 | orchestrator |  } 2026-03-24 02:40:33.443283 | orchestrator | } 2026-03-24 02:40:33.443289 | orchestrator | 2026-03-24 02:40:33.443295 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-24 02:40:33.443301 | orchestrator | Tuesday 24 March 2026 02:40:32 +0000 (0:00:00.191) 0:00:35.851 ********* 2026-03-24 02:40:33.443308 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-24 02:40:33.443315 | orchestrator | 2026-03-24 02:40:33.443321 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:40:33.443328 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-24 02:40:33.443337 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-24 02:40:33.443344 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-24 02:40:33.443350 | orchestrator | 2026-03-24 02:40:33.443356 | orchestrator | 2026-03-24 02:40:33.443362 | orchestrator | 2026-03-24 02:40:33.443368 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:40:33.443375 | orchestrator | Tuesday 24 March 2026 02:40:33 +0000 (0:00:00.876) 0:00:36.727 ********* 2026-03-24 02:40:33.443382 | orchestrator | =============================================================================== 2026-03-24 02:40:33.443389 | orchestrator | Write configuration file ------------------------------------------------ 3.47s 2026-03-24 02:40:33.443396 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s 2026-03-24 02:40:33.443403 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s 2026-03-24 02:40:33.443419 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2026-03-24 02:40:33.443426 | orchestrator | Print configuration data ------------------------------------------------ 0.80s 2026-03-24 02:40:33.443433 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-03-24 02:40:33.443440 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-03-24 02:40:33.443446 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2026-03-24 02:40:33.443453 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-03-24 02:40:33.443459 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s 2026-03-24 
02:40:33.443465 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.55s 2026-03-24 02:40:33.443472 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2026-03-24 02:40:33.443479 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2026-03-24 02:40:33.443498 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.53s 2026-03-24 02:40:33.669169 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2026-03-24 02:40:33.669249 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2026-03-24 02:40:33.669259 | orchestrator | Set OSD devices config data --------------------------------------------- 0.49s 2026-03-24 02:40:33.669266 | orchestrator | Add known partitions to the list of available block devices ------------- 0.47s 2026-03-24 02:40:33.669290 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.47s 2026-03-24 02:40:33.669297 | orchestrator | Add known partitions to the list of available block devices ------------- 0.46s 2026-03-24 02:40:55.926421 | orchestrator | 2026-03-24 02:40:55 | INFO  | Task b9645025-73c9-49b7-8789-424757ae4fd6 (sync inventory) is running in background. Output coming soon. 
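The play above maps each entry in `ceph_osd_devices` to an `lvm_volumes` item whose LV and VG names embed the device's `osd_lvm_uuid` (see the "Print configuration data" output). A minimal Python sketch of that mapping, using the values from the log; this is an illustrative reconstruction, not the OSISM implementation:

```python
# Hypothetical sketch of the ceph_osd_devices -> lvm_volumes derivation
# seen in the "Print configuration data" task output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "7dc39596-c9fc-583d-89f8-392d010fb80f"},
    "sdc": {"osd_lvm_uuid": "7e9350b0-7da1-52b7-a847-2b8ea41c8f59"},
}

# Each OSD device yields one LV ("osd-block-<uuid>") inside its own
# VG ("ceph-<uuid>"), matching the structure written to the config file.
lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]

for vol in lvm_volumes:
    print(vol["data_vg"], "->", vol["data"])
```

Because no DB or WAL devices are configured on this node (those tasks are skipped), each `lvm_volumes` entry carries only `data` and `data_vg`.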
2026-03-24 02:41:20.438406 | orchestrator | 2026-03-24 02:40:57 | INFO  | Starting group_vars file reorganization 2026-03-24 02:41:20.438482 | orchestrator | 2026-03-24 02:40:57 | INFO  | Moved 0 file(s) to their respective directories 2026-03-24 02:41:20.438489 | orchestrator | 2026-03-24 02:40:57 | INFO  | Group_vars file reorganization completed 2026-03-24 02:41:20.438493 | orchestrator | 2026-03-24 02:40:59 | INFO  | Starting variable preparation from inventory 2026-03-24 02:41:20.438498 | orchestrator | 2026-03-24 02:41:02 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-24 02:41:20.438502 | orchestrator | 2026-03-24 02:41:02 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-24 02:41:20.438506 | orchestrator | 2026-03-24 02:41:02 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-24 02:41:20.438510 | orchestrator | 2026-03-24 02:41:02 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-24 02:41:20.438515 | orchestrator | 2026-03-24 02:41:02 | INFO  | Variable preparation completed 2026-03-24 02:41:20.438520 | orchestrator | 2026-03-24 02:41:03 | INFO  | Starting inventory overwrite handling 2026-03-24 02:41:20.438527 | orchestrator | 2026-03-24 02:41:03 | INFO  | Handling group overwrites in 99-overwrite 2026-03-24 02:41:20.438535 | orchestrator | 2026-03-24 02:41:03 | INFO  | Removing group frr:children from 60-generic 2026-03-24 02:41:20.438546 | orchestrator | 2026-03-24 02:41:03 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-24 02:41:20.438552 | orchestrator | 2026-03-24 02:41:03 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-24 02:41:20.438617 | orchestrator | 2026-03-24 02:41:03 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-24 02:41:20.438700 | orchestrator | 2026-03-24 02:41:03 | INFO  | Handling group overwrites in 20-roles 2026-03-24 02:41:20.438708 | orchestrator | 2026-03-24 02:41:03 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-03-24 02:41:20.438712 | orchestrator | 2026-03-24 02:41:03 | INFO  | Removed 5 group(s) in total 2026-03-24 02:41:20.438716 | orchestrator | 2026-03-24 02:41:03 | INFO  | Inventory overwrite handling completed 2026-03-24 02:41:20.438720 | orchestrator | 2026-03-24 02:41:05 | INFO  | Starting merge of inventory files 2026-03-24 02:41:20.438723 | orchestrator | 2026-03-24 02:41:05 | INFO  | Inventory files merged successfully 2026-03-24 02:41:20.438727 | orchestrator | 2026-03-24 02:41:09 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-24 02:41:20.438731 | orchestrator | 2026-03-24 02:41:19 | INFO  | Successfully wrote ClusterShell configuration 2026-03-24 02:41:20.438735 | orchestrator | [master 8b958b7] 2026-03-24-02-41 2026-03-24 02:41:20.438741 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-03-24 02:41:22.347633 | orchestrator | 2026-03-24 02:41:22 | INFO  | Task 784455c4-f3f2-4488-8795-6a62935a172d (ceph-create-lvm-devices) was prepared for execution. 2026-03-24 02:41:22.347706 | orchestrator | 2026-03-24 02:41:22 | INFO  | It takes a moment until task 784455c4-f3f2-4488-8795-6a62935a172d (ceph-create-lvm-devices) has been started and output is visible here. 
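A side note on the `osd_lvm_uuid` values appearing in the ceph-create-lvm-devices output below: their version nibble is 5, which suggests (an assumption, not confirmed by this log) that they are name-based UUIDv5 values, i.e. deterministic per device, so re-running the play reproduces the same VG/LV names. A quick Python check:

```python
import uuid

# Value taken verbatim from the log (testbed-node-3, sdb).
u = uuid.UUID("4d21def1-f46f-5673-adc8-800ee07d688b")
print(u.version)  # version field of the UUID

# Illustrative only: how a stable, name-derived UUID could be produced.
# The namespace and name string here are made-up assumptions.
example = uuid.uuid5(uuid.NAMESPACE_DNS, "testbed-node-3.sdb")
print(example.version)
```

Deterministic names matter for idempotency: the "Create block VGs" / "Create block LVs" tasks can safely re-run against existing volume groups without generating duplicates.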
2026-03-24 02:41:32.499770 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-24 02:41:32.500880 | orchestrator | 2.16.14 2026-03-24 02:41:32.500961 | orchestrator | 2026-03-24 02:41:32.500976 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-24 02:41:32.500990 | orchestrator | 2026-03-24 02:41:32.501001 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-24 02:41:32.501013 | orchestrator | Tuesday 24 March 2026 02:41:26 +0000 (0:00:00.227) 0:00:00.227 ********* 2026-03-24 02:41:32.501025 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-24 02:41:32.501036 | orchestrator | 2026-03-24 02:41:32.501048 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-24 02:41:32.501059 | orchestrator | Tuesday 24 March 2026 02:41:26 +0000 (0:00:00.229) 0:00:00.457 ********* 2026-03-24 02:41:32.501070 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:41:32.501081 | orchestrator | 2026-03-24 02:41:32.501092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.501103 | orchestrator | Tuesday 24 March 2026 02:41:26 +0000 (0:00:00.206) 0:00:00.663 ********* 2026-03-24 02:41:32.501113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-24 02:41:32.501125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-24 02:41:32.501136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-24 02:41:32.501165 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-24 02:41:32.501176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-24 
02:41:32.501188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-24 02:41:32.501198 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-24 02:41:32.501209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-24 02:41:32.501220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-24 02:41:32.501231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-24 02:41:32.501242 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-24 02:41:32.501281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-24 02:41:32.501293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-24 02:41:32.501303 | orchestrator | 2026-03-24 02:41:32.501314 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.501325 | orchestrator | Tuesday 24 March 2026 02:41:26 +0000 (0:00:00.423) 0:00:01.087 ********* 2026-03-24 02:41:32.501336 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.501347 | orchestrator | 2026-03-24 02:41:32.501358 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.501369 | orchestrator | Tuesday 24 March 2026 02:41:27 +0000 (0:00:00.180) 0:00:01.268 ********* 2026-03-24 02:41:32.501380 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.501391 | orchestrator | 2026-03-24 02:41:32.501401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.501412 | orchestrator | Tuesday 24 March 2026 02:41:27 +0000 (0:00:00.204) 0:00:01.472 ********* 2026-03-24 
02:41:32.501430 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.501449 | orchestrator | 2026-03-24 02:41:32.501468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.501486 | orchestrator | Tuesday 24 March 2026 02:41:27 +0000 (0:00:00.177) 0:00:01.650 ********* 2026-03-24 02:41:32.501503 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.501519 | orchestrator | 2026-03-24 02:41:32.501538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.501555 | orchestrator | Tuesday 24 March 2026 02:41:27 +0000 (0:00:00.173) 0:00:01.823 ********* 2026-03-24 02:41:32.501609 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.501628 | orchestrator | 2026-03-24 02:41:32.501648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.501667 | orchestrator | Tuesday 24 March 2026 02:41:27 +0000 (0:00:00.168) 0:00:01.991 ********* 2026-03-24 02:41:32.501686 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.501705 | orchestrator | 2026-03-24 02:41:32.501723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.501739 | orchestrator | Tuesday 24 March 2026 02:41:28 +0000 (0:00:00.193) 0:00:02.184 ********* 2026-03-24 02:41:32.501757 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.501774 | orchestrator | 2026-03-24 02:41:32.501792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.501812 | orchestrator | Tuesday 24 March 2026 02:41:28 +0000 (0:00:00.194) 0:00:02.379 ********* 2026-03-24 02:41:32.501832 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.501852 | orchestrator | 2026-03-24 02:41:32.501870 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-24 02:41:32.501922 | orchestrator | Tuesday 24 March 2026 02:41:28 +0000 (0:00:00.186) 0:00:02.565 ********* 2026-03-24 02:41:32.502056 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8) 2026-03-24 02:41:32.502080 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8) 2026-03-24 02:41:32.502092 | orchestrator | 2026-03-24 02:41:32.502103 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.502147 | orchestrator | Tuesday 24 March 2026 02:41:28 +0000 (0:00:00.405) 0:00:02.970 ********* 2026-03-24 02:41:32.502159 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d) 2026-03-24 02:41:32.502171 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d) 2026-03-24 02:41:32.502181 | orchestrator | 2026-03-24 02:41:32.502192 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.502203 | orchestrator | Tuesday 24 March 2026 02:41:29 +0000 (0:00:00.508) 0:00:03.479 ********* 2026-03-24 02:41:32.502214 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b) 2026-03-24 02:41:32.502283 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b) 2026-03-24 02:41:32.502297 | orchestrator | 2026-03-24 02:41:32.502308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.502319 | orchestrator | Tuesday 24 March 2026 02:41:29 +0000 (0:00:00.533) 0:00:04.012 ********* 2026-03-24 02:41:32.502330 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f) 2026-03-24 02:41:32.502341 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f) 2026-03-24 02:41:32.502352 | orchestrator | 2026-03-24 02:41:32.502363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:32.502383 | orchestrator | Tuesday 24 March 2026 02:41:30 +0000 (0:00:00.649) 0:00:04.661 ********* 2026-03-24 02:41:32.502396 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-24 02:41:32.502408 | orchestrator | 2026-03-24 02:41:32.502419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:32.502431 | orchestrator | Tuesday 24 March 2026 02:41:30 +0000 (0:00:00.287) 0:00:04.949 ********* 2026-03-24 02:41:32.502442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-24 02:41:32.502453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-24 02:41:32.502464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-24 02:41:32.502475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-24 02:41:32.502485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-24 02:41:32.502496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-24 02:41:32.502507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-24 02:41:32.502518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-24 02:41:32.502529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-24 02:41:32.502539 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-24 02:41:32.502550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-24 02:41:32.502561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-24 02:41:32.502661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-24 02:41:32.502672 | orchestrator | 2026-03-24 02:41:32.502683 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:32.502694 | orchestrator | Tuesday 24 March 2026 02:41:31 +0000 (0:00:00.367) 0:00:05.316 ********* 2026-03-24 02:41:32.502705 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.502716 | orchestrator | 2026-03-24 02:41:32.502727 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:32.502738 | orchestrator | Tuesday 24 March 2026 02:41:31 +0000 (0:00:00.176) 0:00:05.493 ********* 2026-03-24 02:41:32.502749 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.502759 | orchestrator | 2026-03-24 02:41:32.502771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:32.502782 | orchestrator | Tuesday 24 March 2026 02:41:31 +0000 (0:00:00.180) 0:00:05.673 ********* 2026-03-24 02:41:32.502792 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.502803 | orchestrator | 2026-03-24 02:41:32.502814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:32.502840 | orchestrator | Tuesday 24 March 2026 02:41:31 +0000 (0:00:00.182) 0:00:05.855 ********* 2026-03-24 02:41:32.502860 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.502878 | orchestrator | 2026-03-24 02:41:32.502897 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-24 02:41:32.502916 | orchestrator | Tuesday 24 March 2026 02:41:31 +0000 (0:00:00.178) 0:00:06.033 ********* 2026-03-24 02:41:32.502935 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.502955 | orchestrator | 2026-03-24 02:41:32.502977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:32.502996 | orchestrator | Tuesday 24 March 2026 02:41:32 +0000 (0:00:00.186) 0:00:06.220 ********* 2026-03-24 02:41:32.503015 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.503031 | orchestrator | 2026-03-24 02:41:32.503043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:32.503053 | orchestrator | Tuesday 24 March 2026 02:41:32 +0000 (0:00:00.187) 0:00:06.407 ********* 2026-03-24 02:41:32.503145 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:32.503159 | orchestrator | 2026-03-24 02:41:32.503183 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:39.847395 | orchestrator | Tuesday 24 March 2026 02:41:32 +0000 (0:00:00.181) 0:00:06.588 ********* 2026-03-24 02:41:39.847483 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847492 | orchestrator | 2026-03-24 02:41:39.847498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:39.847505 | orchestrator | Tuesday 24 March 2026 02:41:32 +0000 (0:00:00.445) 0:00:07.034 ********* 2026-03-24 02:41:39.847510 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-24 02:41:39.847516 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-24 02:41:39.847522 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-24 02:41:39.847527 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-24 02:41:39.847532 | orchestrator | 2026-03-24 
02:41:39.847538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:39.847543 | orchestrator | Tuesday 24 March 2026 02:41:33 +0000 (0:00:00.586) 0:00:07.620 ********* 2026-03-24 02:41:39.847548 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847553 | orchestrator | 2026-03-24 02:41:39.847558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:39.847563 | orchestrator | Tuesday 24 March 2026 02:41:33 +0000 (0:00:00.177) 0:00:07.797 ********* 2026-03-24 02:41:39.847568 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847636 | orchestrator | 2026-03-24 02:41:39.847641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:39.847646 | orchestrator | Tuesday 24 March 2026 02:41:33 +0000 (0:00:00.187) 0:00:07.985 ********* 2026-03-24 02:41:39.847665 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847670 | orchestrator | 2026-03-24 02:41:39.847675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:41:39.847681 | orchestrator | Tuesday 24 March 2026 02:41:34 +0000 (0:00:00.179) 0:00:08.164 ********* 2026-03-24 02:41:39.847686 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847691 | orchestrator | 2026-03-24 02:41:39.847696 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-24 02:41:39.847701 | orchestrator | Tuesday 24 March 2026 02:41:34 +0000 (0:00:00.172) 0:00:08.336 ********* 2026-03-24 02:41:39.847706 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847712 | orchestrator | 2026-03-24 02:41:39.847717 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-24 02:41:39.847722 | orchestrator | Tuesday 24 March 2026 02:41:34 +0000 (0:00:00.131) 
0:00:08.468 ********* 2026-03-24 02:41:39.847728 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d21def1-f46f-5673-adc8-800ee07d688b'}}) 2026-03-24 02:41:39.847734 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd7857bb6-ee47-5754-bddf-a4c3c3300a80'}}) 2026-03-24 02:41:39.847754 | orchestrator | 2026-03-24 02:41:39.847759 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-24 02:41:39.847765 | orchestrator | Tuesday 24 March 2026 02:41:34 +0000 (0:00:00.169) 0:00:08.637 ********* 2026-03-24 02:41:39.847772 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'}) 2026-03-24 02:41:39.847778 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'}) 2026-03-24 02:41:39.847783 | orchestrator | 2026-03-24 02:41:39.847788 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-24 02:41:39.847794 | orchestrator | Tuesday 24 March 2026 02:41:36 +0000 (0:00:01.927) 0:00:10.564 ********* 2026-03-24 02:41:39.847799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:39.847805 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:39.847811 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847816 | orchestrator | 2026-03-24 02:41:39.847821 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-24 02:41:39.847826 | orchestrator | Tuesday 24 March 2026 
02:41:36 +0000 (0:00:00.147) 0:00:10.712 ********* 2026-03-24 02:41:39.847831 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'}) 2026-03-24 02:41:39.847836 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'}) 2026-03-24 02:41:39.847842 | orchestrator | 2026-03-24 02:41:39.847847 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-24 02:41:39.847852 | orchestrator | Tuesday 24 March 2026 02:41:38 +0000 (0:00:01.501) 0:00:12.214 ********* 2026-03-24 02:41:39.847873 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:39.847878 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:39.847884 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847889 | orchestrator | 2026-03-24 02:41:39.847894 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-24 02:41:39.847899 | orchestrator | Tuesday 24 March 2026 02:41:38 +0000 (0:00:00.137) 0:00:12.352 ********* 2026-03-24 02:41:39.847917 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847923 | orchestrator | 2026-03-24 02:41:39.847928 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-24 02:41:39.847933 | orchestrator | Tuesday 24 March 2026 02:41:38 +0000 (0:00:00.238) 0:00:12.590 ********* 2026-03-24 02:41:39.847938 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 
'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:39.847945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:39.847951 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847956 | orchestrator | 2026-03-24 02:41:39.847962 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-24 02:41:39.847968 | orchestrator | Tuesday 24 March 2026 02:41:38 +0000 (0:00:00.144) 0:00:12.735 ********* 2026-03-24 02:41:39.847974 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.847979 | orchestrator | 2026-03-24 02:41:39.847989 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-24 02:41:39.847995 | orchestrator | Tuesday 24 March 2026 02:41:38 +0000 (0:00:00.127) 0:00:12.862 ********* 2026-03-24 02:41:39.848005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:39.848011 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:39.848017 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.848023 | orchestrator | 2026-03-24 02:41:39.848028 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-24 02:41:39.848034 | orchestrator | Tuesday 24 March 2026 02:41:38 +0000 (0:00:00.141) 0:00:13.003 ********* 2026-03-24 02:41:39.848040 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.848046 | orchestrator | 2026-03-24 02:41:39.848052 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-24 02:41:39.848058 | orchestrator | 
Tuesday 24 March 2026 02:41:39 +0000 (0:00:00.128) 0:00:13.132 ********* 2026-03-24 02:41:39.848064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:39.848069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:39.848076 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.848081 | orchestrator | 2026-03-24 02:41:39.848087 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-24 02:41:39.848093 | orchestrator | Tuesday 24 March 2026 02:41:39 +0000 (0:00:00.137) 0:00:13.269 ********* 2026-03-24 02:41:39.848099 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:41:39.848105 | orchestrator | 2026-03-24 02:41:39.848111 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-24 02:41:39.848117 | orchestrator | Tuesday 24 March 2026 02:41:39 +0000 (0:00:00.132) 0:00:13.401 ********* 2026-03-24 02:41:39.848123 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:39.848129 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:39.848135 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.848141 | orchestrator | 2026-03-24 02:41:39.848147 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-24 02:41:39.848153 | orchestrator | Tuesday 24 March 2026 02:41:39 +0000 (0:00:00.136) 0:00:13.538 ********* 2026-03-24 02:41:39.848158 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:39.848164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:39.848170 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.848176 | orchestrator | 2026-03-24 02:41:39.848181 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-24 02:41:39.848187 | orchestrator | Tuesday 24 March 2026 02:41:39 +0000 (0:00:00.141) 0:00:13.679 ********* 2026-03-24 02:41:39.848193 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:39.848199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:39.848205 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.848214 | orchestrator | 2026-03-24 02:41:39.848220 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-24 02:41:39.848226 | orchestrator | Tuesday 24 March 2026 02:41:39 +0000 (0:00:00.135) 0:00:13.814 ********* 2026-03-24 02:41:39.848232 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:39.848238 | orchestrator | 2026-03-24 02:41:39.848244 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-24 02:41:39.848253 | orchestrator | Tuesday 24 March 2026 02:41:39 +0000 (0:00:00.122) 0:00:13.937 ********* 2026-03-24 02:41:45.568020 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.568918 | orchestrator | 2026-03-24 02:41:45.568967 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-03-24 02:41:45.568982 | orchestrator | Tuesday 24 March 2026 02:41:39 +0000 (0:00:00.123) 0:00:14.061 ********* 2026-03-24 02:41:45.568993 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.569005 | orchestrator | 2026-03-24 02:41:45.569018 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-24 02:41:45.569029 | orchestrator | Tuesday 24 March 2026 02:41:40 +0000 (0:00:00.238) 0:00:14.299 ********* 2026-03-24 02:41:45.569040 | orchestrator | ok: [testbed-node-3] => { 2026-03-24 02:41:45.569078 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-24 02:41:45.569096 | orchestrator | } 2026-03-24 02:41:45.569122 | orchestrator | 2026-03-24 02:41:45.569147 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-24 02:41:45.569166 | orchestrator | Tuesday 24 March 2026 02:41:40 +0000 (0:00:00.122) 0:00:14.422 ********* 2026-03-24 02:41:45.569184 | orchestrator | ok: [testbed-node-3] => { 2026-03-24 02:41:45.569202 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-24 02:41:45.569221 | orchestrator | } 2026-03-24 02:41:45.569239 | orchestrator | 2026-03-24 02:41:45.569257 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-24 02:41:45.569276 | orchestrator | Tuesday 24 March 2026 02:41:40 +0000 (0:00:00.135) 0:00:14.557 ********* 2026-03-24 02:41:45.569295 | orchestrator | ok: [testbed-node-3] => { 2026-03-24 02:41:45.569336 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-24 02:41:45.569357 | orchestrator | } 2026-03-24 02:41:45.569376 | orchestrator | 2026-03-24 02:41:45.569395 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-24 02:41:45.569415 | orchestrator | Tuesday 24 March 2026 02:41:40 +0000 (0:00:00.133) 0:00:14.690 ********* 2026-03-24 02:41:45.569435 | orchestrator | ok: 
[testbed-node-3] 2026-03-24 02:41:45.569454 | orchestrator | 2026-03-24 02:41:45.569474 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-24 02:41:45.569493 | orchestrator | Tuesday 24 March 2026 02:41:41 +0000 (0:00:00.632) 0:00:15.322 ********* 2026-03-24 02:41:45.569513 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:41:45.569532 | orchestrator | 2026-03-24 02:41:45.569552 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-24 02:41:45.569564 | orchestrator | Tuesday 24 March 2026 02:41:41 +0000 (0:00:00.530) 0:00:15.853 ********* 2026-03-24 02:41:45.569625 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:41:45.569639 | orchestrator | 2026-03-24 02:41:45.569650 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-24 02:41:45.569660 | orchestrator | Tuesday 24 March 2026 02:41:42 +0000 (0:00:00.527) 0:00:16.380 ********* 2026-03-24 02:41:45.569671 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:41:45.569682 | orchestrator | 2026-03-24 02:41:45.569693 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-24 02:41:45.569704 | orchestrator | Tuesday 24 March 2026 02:41:42 +0000 (0:00:00.138) 0:00:16.519 ********* 2026-03-24 02:41:45.569714 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.569725 | orchestrator | 2026-03-24 02:41:45.569736 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-24 02:41:45.569746 | orchestrator | Tuesday 24 March 2026 02:41:42 +0000 (0:00:00.102) 0:00:16.622 ********* 2026-03-24 02:41:45.569757 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.569795 | orchestrator | 2026-03-24 02:41:45.569806 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-24 02:41:45.569817 | orchestrator | 
Tuesday 24 March 2026 02:41:42 +0000 (0:00:00.099) 0:00:16.721 ********* 2026-03-24 02:41:45.569828 | orchestrator | ok: [testbed-node-3] => { 2026-03-24 02:41:45.569839 | orchestrator |  "vgs_report": { 2026-03-24 02:41:45.569850 | orchestrator |  "vg": [] 2026-03-24 02:41:45.569861 | orchestrator |  } 2026-03-24 02:41:45.569872 | orchestrator | } 2026-03-24 02:41:45.569883 | orchestrator | 2026-03-24 02:41:45.569893 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-24 02:41:45.569904 | orchestrator | Tuesday 24 March 2026 02:41:42 +0000 (0:00:00.128) 0:00:16.850 ********* 2026-03-24 02:41:45.569915 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.569925 | orchestrator | 2026-03-24 02:41:45.569936 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-24 02:41:45.569947 | orchestrator | Tuesday 24 March 2026 02:41:42 +0000 (0:00:00.112) 0:00:16.962 ********* 2026-03-24 02:41:45.569957 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.569968 | orchestrator | 2026-03-24 02:41:45.569978 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-24 02:41:45.570106 | orchestrator | Tuesday 24 March 2026 02:41:43 +0000 (0:00:00.247) 0:00:17.210 ********* 2026-03-24 02:41:45.570120 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570131 | orchestrator | 2026-03-24 02:41:45.570141 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-24 02:41:45.570152 | orchestrator | Tuesday 24 March 2026 02:41:43 +0000 (0:00:00.146) 0:00:17.357 ********* 2026-03-24 02:41:45.570163 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570173 | orchestrator | 2026-03-24 02:41:45.570208 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-24 02:41:45.570219 | orchestrator | Tuesday 
24 March 2026 02:41:43 +0000 (0:00:00.123) 0:00:17.480 ********* 2026-03-24 02:41:45.570230 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570241 | orchestrator | 2026-03-24 02:41:45.570251 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-24 02:41:45.570262 | orchestrator | Tuesday 24 March 2026 02:41:43 +0000 (0:00:00.130) 0:00:17.611 ********* 2026-03-24 02:41:45.570273 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570284 | orchestrator | 2026-03-24 02:41:45.570294 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-24 02:41:45.570305 | orchestrator | Tuesday 24 March 2026 02:41:43 +0000 (0:00:00.105) 0:00:17.717 ********* 2026-03-24 02:41:45.570316 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570327 | orchestrator | 2026-03-24 02:41:45.570337 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-24 02:41:45.570348 | orchestrator | Tuesday 24 March 2026 02:41:43 +0000 (0:00:00.126) 0:00:17.843 ********* 2026-03-24 02:41:45.570384 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570395 | orchestrator | 2026-03-24 02:41:45.570406 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-24 02:41:45.570450 | orchestrator | Tuesday 24 March 2026 02:41:43 +0000 (0:00:00.128) 0:00:17.972 ********* 2026-03-24 02:41:45.570463 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570477 | orchestrator | 2026-03-24 02:41:45.570495 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-24 02:41:45.570514 | orchestrator | Tuesday 24 March 2026 02:41:44 +0000 (0:00:00.130) 0:00:18.103 ********* 2026-03-24 02:41:45.570532 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570549 | orchestrator | 2026-03-24 02:41:45.570568 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-24 02:41:45.570613 | orchestrator | Tuesday 24 March 2026 02:41:44 +0000 (0:00:00.126) 0:00:18.229 ********* 2026-03-24 02:41:45.570631 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570648 | orchestrator | 2026-03-24 02:41:45.570682 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-24 02:41:45.570700 | orchestrator | Tuesday 24 March 2026 02:41:44 +0000 (0:00:00.145) 0:00:18.375 ********* 2026-03-24 02:41:45.570717 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570737 | orchestrator | 2026-03-24 02:41:45.570753 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-24 02:41:45.570768 | orchestrator | Tuesday 24 March 2026 02:41:44 +0000 (0:00:00.113) 0:00:18.489 ********* 2026-03-24 02:41:45.570797 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570817 | orchestrator | 2026-03-24 02:41:45.570836 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-24 02:41:45.570854 | orchestrator | Tuesday 24 March 2026 02:41:44 +0000 (0:00:00.130) 0:00:18.619 ********* 2026-03-24 02:41:45.570871 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570890 | orchestrator | 2026-03-24 02:41:45.570909 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-24 02:41:45.570927 | orchestrator | Tuesday 24 March 2026 02:41:44 +0000 (0:00:00.234) 0:00:18.854 ********* 2026-03-24 02:41:45.570947 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:45.570967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 
'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:45.570985 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.570999 | orchestrator | 2026-03-24 02:41:45.571010 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-24 02:41:45.571020 | orchestrator | Tuesday 24 March 2026 02:41:44 +0000 (0:00:00.147) 0:00:19.001 ********* 2026-03-24 02:41:45.571031 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:45.571042 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:45.571053 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.571064 | orchestrator | 2026-03-24 02:41:45.571074 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-24 02:41:45.571085 | orchestrator | Tuesday 24 March 2026 02:41:45 +0000 (0:00:00.142) 0:00:19.144 ********* 2026-03-24 02:41:45.571096 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:45.571106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:45.571117 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.571128 | orchestrator | 2026-03-24 02:41:45.571138 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-24 02:41:45.571149 | orchestrator | Tuesday 24 March 2026 02:41:45 +0000 (0:00:00.139) 0:00:19.283 ********* 2026-03-24 02:41:45.571160 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:45.571170 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:45.571181 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.571192 | orchestrator | 2026-03-24 02:41:45.571202 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-24 02:41:45.571213 | orchestrator | Tuesday 24 March 2026 02:41:45 +0000 (0:00:00.127) 0:00:19.410 ********* 2026-03-24 02:41:45.571224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:45.571244 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:45.571255 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:45.571266 | orchestrator | 2026-03-24 02:41:45.571277 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-24 02:41:45.571287 | orchestrator | Tuesday 24 March 2026 02:41:45 +0000 (0:00:00.119) 0:00:19.530 ********* 2026-03-24 02:41:45.571311 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:50.150181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:50.150259 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:50.150268 | orchestrator | 2026-03-24 02:41:50.150274 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-24 02:41:50.150280 | orchestrator | Tuesday 24 March 2026 02:41:45 +0000 (0:00:00.128) 0:00:19.658 ********* 2026-03-24 02:41:50.150285 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:50.150291 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:50.150296 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:50.150300 | orchestrator | 2026-03-24 02:41:50.150305 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-24 02:41:50.150310 | orchestrator | Tuesday 24 March 2026 02:41:45 +0000 (0:00:00.139) 0:00:19.798 ********* 2026-03-24 02:41:50.150327 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:50.150332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:50.150337 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:50.150342 | orchestrator | 2026-03-24 02:41:50.150349 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-24 02:41:50.150357 | orchestrator | Tuesday 24 March 2026 02:41:45 +0000 (0:00:00.146) 0:00:19.945 ********* 2026-03-24 02:41:50.150366 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:41:50.150379 | orchestrator | 2026-03-24 02:41:50.150386 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-24 02:41:50.150393 | orchestrator | Tuesday 24 March 2026 02:41:46 +0000 
(0:00:00.515) 0:00:20.460 ********* 2026-03-24 02:41:50.150400 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:41:50.150407 | orchestrator | 2026-03-24 02:41:50.150414 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-24 02:41:50.150421 | orchestrator | Tuesday 24 March 2026 02:41:46 +0000 (0:00:00.512) 0:00:20.973 ********* 2026-03-24 02:41:50.150429 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:41:50.150436 | orchestrator | 2026-03-24 02:41:50.150444 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-24 02:41:50.150452 | orchestrator | Tuesday 24 March 2026 02:41:47 +0000 (0:00:00.132) 0:00:21.105 ********* 2026-03-24 02:41:50.150461 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'vg_name': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'}) 2026-03-24 02:41:50.150470 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'vg_name': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'}) 2026-03-24 02:41:50.150478 | orchestrator | 2026-03-24 02:41:50.150484 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-24 02:41:50.150517 | orchestrator | Tuesday 24 March 2026 02:41:47 +0000 (0:00:00.150) 0:00:21.256 ********* 2026-03-24 02:41:50.150526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:50.150534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:50.150540 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:50.150547 | orchestrator | 2026-03-24 02:41:50.150554 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-24 02:41:50.150561 | orchestrator | Tuesday 24 March 2026 02:41:47 +0000 (0:00:00.252) 0:00:21.509 ********* 2026-03-24 02:41:50.150568 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:50.150628 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:50.150638 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:50.150645 | orchestrator | 2026-03-24 02:41:50.150653 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-24 02:41:50.150661 | orchestrator | Tuesday 24 March 2026 02:41:47 +0000 (0:00:00.146) 0:00:21.655 ********* 2026-03-24 02:41:50.150668 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 02:41:50.150676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 02:41:50.150684 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:41:50.150693 | orchestrator | 2026-03-24 02:41:50.150704 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-24 02:41:50.150712 | orchestrator | Tuesday 24 March 2026 02:41:47 +0000 (0:00:00.142) 0:00:21.798 ********* 2026-03-24 02:41:50.150737 | orchestrator | ok: [testbed-node-3] => { 2026-03-24 02:41:50.150746 | orchestrator |  "lvm_report": { 2026-03-24 02:41:50.150753 | orchestrator |  "lv": [ 2026-03-24 02:41:50.150761 | orchestrator |  { 2026-03-24 02:41:50.150769 | orchestrator |  "lv_name": 
"osd-block-4d21def1-f46f-5673-adc8-800ee07d688b", 2026-03-24 02:41:50.150777 | orchestrator |  "vg_name": "ceph-4d21def1-f46f-5673-adc8-800ee07d688b" 2026-03-24 02:41:50.150784 | orchestrator |  }, 2026-03-24 02:41:50.150791 | orchestrator |  { 2026-03-24 02:41:50.150798 | orchestrator |  "lv_name": "osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80", 2026-03-24 02:41:50.150805 | orchestrator |  "vg_name": "ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80" 2026-03-24 02:41:50.150813 | orchestrator |  } 2026-03-24 02:41:50.150821 | orchestrator |  ], 2026-03-24 02:41:50.150828 | orchestrator |  "pv": [ 2026-03-24 02:41:50.150837 | orchestrator |  { 2026-03-24 02:41:50.150845 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-24 02:41:50.150853 | orchestrator |  "vg_name": "ceph-4d21def1-f46f-5673-adc8-800ee07d688b" 2026-03-24 02:41:50.150861 | orchestrator |  }, 2026-03-24 02:41:50.150868 | orchestrator |  { 2026-03-24 02:41:50.150876 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-24 02:41:50.150893 | orchestrator |  "vg_name": "ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80" 2026-03-24 02:41:50.150902 | orchestrator |  } 2026-03-24 02:41:50.150910 | orchestrator |  ] 2026-03-24 02:41:50.150918 | orchestrator |  } 2026-03-24 02:41:50.150926 | orchestrator | } 2026-03-24 02:41:50.150935 | orchestrator | 2026-03-24 02:41:50.150943 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-24 02:41:50.150961 | orchestrator | 2026-03-24 02:41:50.150969 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-24 02:41:50.150977 | orchestrator | Tuesday 24 March 2026 02:41:47 +0000 (0:00:00.253) 0:00:22.051 ********* 2026-03-24 02:41:50.150985 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-24 02:41:50.150994 | orchestrator | 2026-03-24 02:41:50.151002 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-24 
02:41:50.151010 | orchestrator | Tuesday 24 March 2026 02:41:48 +0000 (0:00:00.232) 0:00:22.283 ********* 2026-03-24 02:41:50.151017 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:41:50.151024 | orchestrator | 2026-03-24 02:41:50.151032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:50.151039 | orchestrator | Tuesday 24 March 2026 02:41:48 +0000 (0:00:00.219) 0:00:22.503 ********* 2026-03-24 02:41:50.151047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-24 02:41:50.151055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-24 02:41:50.151062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-24 02:41:50.151070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-24 02:41:50.151078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-24 02:41:50.151086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-24 02:41:50.151093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-24 02:41:50.151101 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-24 02:41:50.151108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-24 02:41:50.151112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-24 02:41:50.151117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-24 02:41:50.151121 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-24 02:41:50.151126 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-24 02:41:50.151130 | orchestrator | 2026-03-24 02:41:50.151135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:50.151140 | orchestrator | Tuesday 24 March 2026 02:41:48 +0000 (0:00:00.364) 0:00:22.867 ********* 2026-03-24 02:41:50.151144 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:41:50.151149 | orchestrator | 2026-03-24 02:41:50.151154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:50.151162 | orchestrator | Tuesday 24 March 2026 02:41:48 +0000 (0:00:00.190) 0:00:23.058 ********* 2026-03-24 02:41:50.151172 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:41:50.151183 | orchestrator | 2026-03-24 02:41:50.151190 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:50.151197 | orchestrator | Tuesday 24 March 2026 02:41:49 +0000 (0:00:00.442) 0:00:23.500 ********* 2026-03-24 02:41:50.151205 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:41:50.151212 | orchestrator | 2026-03-24 02:41:50.151218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:50.151226 | orchestrator | Tuesday 24 March 2026 02:41:49 +0000 (0:00:00.198) 0:00:23.699 ********* 2026-03-24 02:41:50.151232 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:41:50.151239 | orchestrator | 2026-03-24 02:41:50.151246 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:41:50.151253 | orchestrator | Tuesday 24 March 2026 02:41:49 +0000 (0:00:00.179) 0:00:23.878 ********* 2026-03-24 02:41:50.151259 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:41:50.151267 | orchestrator | 2026-03-24 02:41:50.151283 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-24 02:41:50.151292 | orchestrator | Tuesday 24 March 2026 02:41:49 +0000 (0:00:00.178) 0:00:24.057 ********* 2026-03-24 02:41:50.151299 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:41:50.151306 | orchestrator | 2026-03-24 02:41:50.151324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:00.050552 | orchestrator | Tuesday 24 March 2026 02:41:50 +0000 (0:00:00.183) 0:00:24.240 ********* 2026-03-24 02:42:00.050698 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.050713 | orchestrator | 2026-03-24 02:42:00.050723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:00.050733 | orchestrator | Tuesday 24 March 2026 02:41:50 +0000 (0:00:00.190) 0:00:24.431 ********* 2026-03-24 02:42:00.050742 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.050751 | orchestrator | 2026-03-24 02:42:00.050760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:00.050769 | orchestrator | Tuesday 24 March 2026 02:41:50 +0000 (0:00:00.193) 0:00:24.624 ********* 2026-03-24 02:42:00.050777 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba) 2026-03-24 02:42:00.050787 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba) 2026-03-24 02:42:00.050796 | orchestrator | 2026-03-24 02:42:00.050805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:00.050814 | orchestrator | Tuesday 24 March 2026 02:41:50 +0000 (0:00:00.393) 0:00:25.017 ********* 2026-03-24 02:42:00.050837 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710) 2026-03-24 02:42:00.050847 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710) 2026-03-24 02:42:00.050855 | orchestrator | 2026-03-24 02:42:00.050864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:00.050873 | orchestrator | Tuesday 24 March 2026 02:41:51 +0000 (0:00:00.397) 0:00:25.414 ********* 2026-03-24 02:42:00.050881 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c) 2026-03-24 02:42:00.050890 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c) 2026-03-24 02:42:00.050899 | orchestrator | 2026-03-24 02:42:00.050908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:00.050916 | orchestrator | Tuesday 24 March 2026 02:41:51 +0000 (0:00:00.538) 0:00:25.953 ********* 2026-03-24 02:42:00.050925 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b) 2026-03-24 02:42:00.050934 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b) 2026-03-24 02:42:00.050942 | orchestrator | 2026-03-24 02:42:00.050951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:00.050960 | orchestrator | Tuesday 24 March 2026 02:41:52 +0000 (0:00:00.675) 0:00:26.629 ********* 2026-03-24 02:42:00.050968 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-24 02:42:00.050977 | orchestrator | 2026-03-24 02:42:00.050986 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.050994 | orchestrator | Tuesday 24 March 2026 02:41:52 +0000 (0:00:00.324) 0:00:26.953 ********* 2026-03-24 02:42:00.051003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-24 02:42:00.051012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-24 02:42:00.051021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-24 02:42:00.051029 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-24 02:42:00.051057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-24 02:42:00.051066 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-24 02:42:00.051075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-24 02:42:00.051083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-24 02:42:00.051092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-24 02:42:00.051103 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-24 02:42:00.051113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-24 02:42:00.051123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-24 02:42:00.051133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-24 02:42:00.051143 | orchestrator | 2026-03-24 02:42:00.051154 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051164 | orchestrator | Tuesday 24 March 2026 02:41:53 +0000 (0:00:00.406) 0:00:27.360 ********* 2026-03-24 02:42:00.051174 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051184 | orchestrator | 2026-03-24 
02:42:00.051194 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051204 | orchestrator | Tuesday 24 March 2026 02:41:53 +0000 (0:00:00.184) 0:00:27.544 ********* 2026-03-24 02:42:00.051214 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051224 | orchestrator | 2026-03-24 02:42:00.051234 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051244 | orchestrator | Tuesday 24 March 2026 02:41:53 +0000 (0:00:00.184) 0:00:27.728 ********* 2026-03-24 02:42:00.051255 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051265 | orchestrator | 2026-03-24 02:42:00.051289 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051301 | orchestrator | Tuesday 24 March 2026 02:41:53 +0000 (0:00:00.190) 0:00:27.919 ********* 2026-03-24 02:42:00.051311 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051325 | orchestrator | 2026-03-24 02:42:00.051340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051355 | orchestrator | Tuesday 24 March 2026 02:41:54 +0000 (0:00:00.205) 0:00:28.125 ********* 2026-03-24 02:42:00.051371 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051385 | orchestrator | 2026-03-24 02:42:00.051400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051416 | orchestrator | Tuesday 24 March 2026 02:41:54 +0000 (0:00:00.189) 0:00:28.314 ********* 2026-03-24 02:42:00.051431 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051447 | orchestrator | 2026-03-24 02:42:00.051461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051476 | orchestrator | Tuesday 24 March 2026 02:41:54 +0000 (0:00:00.193) 
0:00:28.508 ********* 2026-03-24 02:42:00.051489 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051504 | orchestrator | 2026-03-24 02:42:00.051517 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051538 | orchestrator | Tuesday 24 March 2026 02:41:54 +0000 (0:00:00.186) 0:00:28.695 ********* 2026-03-24 02:42:00.051554 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051568 | orchestrator | 2026-03-24 02:42:00.051748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051774 | orchestrator | Tuesday 24 March 2026 02:41:55 +0000 (0:00:00.473) 0:00:29.168 ********* 2026-03-24 02:42:00.051784 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-24 02:42:00.051793 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-24 02:42:00.051802 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-24 02:42:00.051824 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-24 02:42:00.051833 | orchestrator | 2026-03-24 02:42:00.051842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051850 | orchestrator | Tuesday 24 March 2026 02:41:55 +0000 (0:00:00.613) 0:00:29.781 ********* 2026-03-24 02:42:00.051859 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051868 | orchestrator | 2026-03-24 02:42:00.051877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051885 | orchestrator | Tuesday 24 March 2026 02:41:55 +0000 (0:00:00.183) 0:00:29.965 ********* 2026-03-24 02:42:00.051894 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051903 | orchestrator | 2026-03-24 02:42:00.051912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051920 | orchestrator | Tuesday 24 
March 2026 02:41:56 +0000 (0:00:00.193) 0:00:30.158 ********* 2026-03-24 02:42:00.051929 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051938 | orchestrator | 2026-03-24 02:42:00.051947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:00.051956 | orchestrator | Tuesday 24 March 2026 02:41:56 +0000 (0:00:00.188) 0:00:30.347 ********* 2026-03-24 02:42:00.051964 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.051973 | orchestrator | 2026-03-24 02:42:00.051982 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-24 02:42:00.051990 | orchestrator | Tuesday 24 March 2026 02:41:56 +0000 (0:00:00.190) 0:00:30.537 ********* 2026-03-24 02:42:00.051999 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.052008 | orchestrator | 2026-03-24 02:42:00.052016 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-24 02:42:00.052025 | orchestrator | Tuesday 24 March 2026 02:41:56 +0000 (0:00:00.126) 0:00:30.663 ********* 2026-03-24 02:42:00.052034 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d735645-9e18-5d04-8028-1696940918c0'}}) 2026-03-24 02:42:00.052043 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a329e066-8536-5438-99e1-d9cc3f91f537'}}) 2026-03-24 02:42:00.052052 | orchestrator | 2026-03-24 02:42:00.052060 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-24 02:42:00.052069 | orchestrator | Tuesday 24 March 2026 02:41:56 +0000 (0:00:00.174) 0:00:30.837 ********* 2026-03-24 02:42:00.052079 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'}) 2026-03-24 02:42:00.052088 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'}) 2026-03-24 02:42:00.052096 | orchestrator | 2026-03-24 02:42:00.052104 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-24 02:42:00.052112 | orchestrator | Tuesday 24 March 2026 02:41:58 +0000 (0:00:01.845) 0:00:32.683 ********* 2026-03-24 02:42:00.052120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:00.052129 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:00.052137 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:00.052145 | orchestrator | 2026-03-24 02:42:00.052153 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-24 02:42:00.052161 | orchestrator | Tuesday 24 March 2026 02:41:58 +0000 (0:00:00.134) 0:00:32.817 ********* 2026-03-24 02:42:00.052169 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'}) 2026-03-24 02:42:00.052190 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'}) 2026-03-24 02:42:05.231713 | orchestrator | 2026-03-24 02:42:05.232465 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-24 02:42:05.232503 | orchestrator | Tuesday 24 March 2026 02:42:00 +0000 (0:00:01.322) 0:00:34.140 ********* 2026-03-24 02:42:05.232515 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 
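The "Create block VGs" / "Create block LVs" tasks above derive an LVM volume group and logical volume name from each entry's `osd_lvm_uuid` in `ceph_osd_devices`. A minimal Python sketch of that naming convention, inferred from the names visible in the log (the helper is hypothetical, not part of the playbook):

```python
# Build the VG/LV naming pairs the "Create block VGs/LVs" tasks appear to use:
# VG "ceph-<uuid>", LV "osd-block-<uuid>".
# ceph_osd_devices mirrors the dict items shown in the log for testbed-node-4.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "4d735645-9e18-5d04-8028-1696940918c0"},
    "sdc": {"osd_lvm_uuid": "a329e066-8536-5438-99e1-d9cc3f91f537"},
}

def lvm_volumes(devices):
    """Return the lvm_volumes-style list of {data, data_vg} dicts."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

volumes = lvm_volumes(ceph_osd_devices)
for v in volumes:
    print(v["data_vg"], v["data"])
```

The resulting pairs match the `data`/`data_vg` items looped over by the subsequent tasks in the log.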
'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:05.232525 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:05.232535 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232544 | orchestrator | 2026-03-24 02:42:05.232553 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-24 02:42:05.232562 | orchestrator | Tuesday 24 March 2026 02:42:00 +0000 (0:00:00.273) 0:00:34.413 ********* 2026-03-24 02:42:05.232625 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232636 | orchestrator | 2026-03-24 02:42:05.232644 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-24 02:42:05.232649 | orchestrator | Tuesday 24 March 2026 02:42:00 +0000 (0:00:00.119) 0:00:34.532 ********* 2026-03-24 02:42:05.232655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:05.232661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:05.232666 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232671 | orchestrator | 2026-03-24 02:42:05.232676 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-24 02:42:05.232681 | orchestrator | Tuesday 24 March 2026 02:42:00 +0000 (0:00:00.140) 0:00:34.673 ********* 2026-03-24 02:42:05.232686 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232691 | orchestrator | 2026-03-24 02:42:05.232697 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-24 02:42:05.232702 | orchestrator | 
Tuesday 24 March 2026 02:42:00 +0000 (0:00:00.131) 0:00:34.805 ********* 2026-03-24 02:42:05.232707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:05.232712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:05.232717 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232722 | orchestrator | 2026-03-24 02:42:05.232728 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-24 02:42:05.232734 | orchestrator | Tuesday 24 March 2026 02:42:00 +0000 (0:00:00.149) 0:00:34.954 ********* 2026-03-24 02:42:05.232739 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232744 | orchestrator | 2026-03-24 02:42:05.232749 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-24 02:42:05.232754 | orchestrator | Tuesday 24 March 2026 02:42:00 +0000 (0:00:00.131) 0:00:35.086 ********* 2026-03-24 02:42:05.232759 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:05.232764 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:05.232769 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232775 | orchestrator | 2026-03-24 02:42:05.232780 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-24 02:42:05.232785 | orchestrator | Tuesday 24 March 2026 02:42:01 +0000 (0:00:00.136) 0:00:35.223 ********* 2026-03-24 02:42:05.232805 | orchestrator | ok: [testbed-node-4] 
2026-03-24 02:42:05.232812 | orchestrator | 2026-03-24 02:42:05.232817 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-24 02:42:05.232822 | orchestrator | Tuesday 24 March 2026 02:42:01 +0000 (0:00:00.129) 0:00:35.353 ********* 2026-03-24 02:42:05.232827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:05.232832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:05.232837 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232842 | orchestrator | 2026-03-24 02:42:05.232847 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-24 02:42:05.232852 | orchestrator | Tuesday 24 March 2026 02:42:01 +0000 (0:00:00.133) 0:00:35.487 ********* 2026-03-24 02:42:05.232857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:05.232862 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:05.232867 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232872 | orchestrator | 2026-03-24 02:42:05.232877 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-24 02:42:05.232899 | orchestrator | Tuesday 24 March 2026 02:42:01 +0000 (0:00:00.139) 0:00:35.626 ********* 2026-03-24 02:42:05.232904 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 
02:42:05.232909 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:05.232914 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232919 | orchestrator | 2026-03-24 02:42:05.232925 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-24 02:42:05.232930 | orchestrator | Tuesday 24 March 2026 02:42:01 +0000 (0:00:00.142) 0:00:35.768 ********* 2026-03-24 02:42:05.232935 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232939 | orchestrator | 2026-03-24 02:42:05.232944 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-24 02:42:05.232954 | orchestrator | Tuesday 24 March 2026 02:42:01 +0000 (0:00:00.257) 0:00:36.026 ********* 2026-03-24 02:42:05.232959 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232964 | orchestrator | 2026-03-24 02:42:05.232969 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-24 02:42:05.232974 | orchestrator | Tuesday 24 March 2026 02:42:02 +0000 (0:00:00.126) 0:00:36.153 ********* 2026-03-24 02:42:05.232979 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.232984 | orchestrator | 2026-03-24 02:42:05.232989 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-24 02:42:05.232994 | orchestrator | Tuesday 24 March 2026 02:42:02 +0000 (0:00:00.120) 0:00:36.273 ********* 2026-03-24 02:42:05.232999 | orchestrator | ok: [testbed-node-4] => { 2026-03-24 02:42:05.233004 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-24 02:42:05.233009 | orchestrator | } 2026-03-24 02:42:05.233015 | orchestrator | 2026-03-24 02:42:05.233020 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-24 
02:42:05.233025 | orchestrator | Tuesday 24 March 2026 02:42:02 +0000 (0:00:00.130) 0:00:36.404 ********* 2026-03-24 02:42:05.233030 | orchestrator | ok: [testbed-node-4] => { 2026-03-24 02:42:05.233035 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-24 02:42:05.233040 | orchestrator | } 2026-03-24 02:42:05.233045 | orchestrator | 2026-03-24 02:42:05.233050 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-24 02:42:05.233059 | orchestrator | Tuesday 24 March 2026 02:42:02 +0000 (0:00:00.127) 0:00:36.531 ********* 2026-03-24 02:42:05.233065 | orchestrator | ok: [testbed-node-4] => { 2026-03-24 02:42:05.233070 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-24 02:42:05.233075 | orchestrator | } 2026-03-24 02:42:05.233080 | orchestrator | 2026-03-24 02:42:05.233085 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-24 02:42:05.233090 | orchestrator | Tuesday 24 March 2026 02:42:02 +0000 (0:00:00.119) 0:00:36.651 ********* 2026-03-24 02:42:05.233095 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:42:05.233100 | orchestrator | 2026-03-24 02:42:05.233105 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-24 02:42:05.233110 | orchestrator | Tuesday 24 March 2026 02:42:03 +0000 (0:00:00.505) 0:00:37.156 ********* 2026-03-24 02:42:05.233115 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:42:05.233120 | orchestrator | 2026-03-24 02:42:05.233125 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-24 02:42:05.233133 | orchestrator | Tuesday 24 March 2026 02:42:03 +0000 (0:00:00.513) 0:00:37.669 ********* 2026-03-24 02:42:05.233142 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:42:05.233150 | orchestrator | 2026-03-24 02:42:05.233158 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
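The three "Gather … VGs with total and available size in bytes" tasks each run an LVM query, and the next task combines the JSON output. A hedged sketch of parsing a `vgs --reportformat json` report (the sample JSON is illustrative; the empty report matches the `"vg": []` the playbook prints for this node, where no DB/WAL devices are configured):

```python
import json

# Illustrative shape of `vgs -o vg_name,vg_size,vg_free --units b --reportformat json`;
# empty on testbed-node-4 because ceph_db/wal_devices are not in use.
sample = '{"report": [{"vg": []}]}'

def vg_sizes(vgs_json):
    """Map VG name -> (total_bytes, free_bytes) from a vgs JSON report."""
    report = json.loads(vgs_json)["report"][0]["vg"]
    return {
        vg["vg_name"]: (int(vg["vg_size"].rstrip("B")),
                        int(vg["vg_free"].rstrip("B")))
        for vg in report
    }

print(vg_sizes(sample))  # empty dict on this node
```

With DB or WAL VGs present, the same parse would feed the later "Calculate VG sizes" and "Fail if size … > available" checks.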
************************* 2026-03-24 02:42:05.233166 | orchestrator | Tuesday 24 March 2026 02:42:04 +0000 (0:00:00.540) 0:00:38.209 ********* 2026-03-24 02:42:05.233175 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:42:05.233182 | orchestrator | 2026-03-24 02:42:05.233189 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-24 02:42:05.233196 | orchestrator | Tuesday 24 March 2026 02:42:04 +0000 (0:00:00.138) 0:00:38.348 ********* 2026-03-24 02:42:05.233203 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.233211 | orchestrator | 2026-03-24 02:42:05.233219 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-24 02:42:05.233228 | orchestrator | Tuesday 24 March 2026 02:42:04 +0000 (0:00:00.099) 0:00:38.448 ********* 2026-03-24 02:42:05.233236 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.233245 | orchestrator | 2026-03-24 02:42:05.233253 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-24 02:42:05.233262 | orchestrator | Tuesday 24 March 2026 02:42:04 +0000 (0:00:00.223) 0:00:38.672 ********* 2026-03-24 02:42:05.233270 | orchestrator | ok: [testbed-node-4] => { 2026-03-24 02:42:05.233280 | orchestrator |  "vgs_report": { 2026-03-24 02:42:05.233285 | orchestrator |  "vg": [] 2026-03-24 02:42:05.233290 | orchestrator |  } 2026-03-24 02:42:05.233296 | orchestrator | } 2026-03-24 02:42:05.233301 | orchestrator | 2026-03-24 02:42:05.233306 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-24 02:42:05.233311 | orchestrator | Tuesday 24 March 2026 02:42:04 +0000 (0:00:00.133) 0:00:38.805 ********* 2026-03-24 02:42:05.233316 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.233321 | orchestrator | 2026-03-24 02:42:05.233326 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-24 02:42:05.233331 | orchestrator | Tuesday 24 March 2026 02:42:04 +0000 (0:00:00.126) 0:00:38.931 ********* 2026-03-24 02:42:05.233336 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.233341 | orchestrator | 2026-03-24 02:42:05.233346 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-24 02:42:05.233351 | orchestrator | Tuesday 24 March 2026 02:42:04 +0000 (0:00:00.134) 0:00:39.065 ********* 2026-03-24 02:42:05.233356 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.233361 | orchestrator | 2026-03-24 02:42:05.233366 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-24 02:42:05.233371 | orchestrator | Tuesday 24 March 2026 02:42:05 +0000 (0:00:00.129) 0:00:39.195 ********* 2026-03-24 02:42:05.233376 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:05.233381 | orchestrator | 2026-03-24 02:42:05.233396 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-24 02:42:09.435108 | orchestrator | Tuesday 24 March 2026 02:42:05 +0000 (0:00:00.126) 0:00:39.321 ********* 2026-03-24 02:42:09.435193 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435203 | orchestrator | 2026-03-24 02:42:09.435211 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-24 02:42:09.435217 | orchestrator | Tuesday 24 March 2026 02:42:05 +0000 (0:00:00.117) 0:00:39.438 ********* 2026-03-24 02:42:09.435223 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435229 | orchestrator | 2026-03-24 02:42:09.435235 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-24 02:42:09.435242 | orchestrator | Tuesday 24 March 2026 02:42:05 +0000 (0:00:00.130) 0:00:39.569 ********* 2026-03-24 02:42:09.435248 | orchestrator | skipping: [testbed-node-4] 
2026-03-24 02:42:09.435253 | orchestrator | 2026-03-24 02:42:09.435260 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-24 02:42:09.435280 | orchestrator | Tuesday 24 March 2026 02:42:05 +0000 (0:00:00.109) 0:00:39.678 ********* 2026-03-24 02:42:09.435286 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435292 | orchestrator | 2026-03-24 02:42:09.435298 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-24 02:42:09.435304 | orchestrator | Tuesday 24 March 2026 02:42:05 +0000 (0:00:00.128) 0:00:39.807 ********* 2026-03-24 02:42:09.435309 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435315 | orchestrator | 2026-03-24 02:42:09.435321 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-24 02:42:09.435327 | orchestrator | Tuesday 24 March 2026 02:42:05 +0000 (0:00:00.123) 0:00:39.930 ********* 2026-03-24 02:42:09.435333 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435339 | orchestrator | 2026-03-24 02:42:09.435345 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-24 02:42:09.435351 | orchestrator | Tuesday 24 March 2026 02:42:06 +0000 (0:00:00.247) 0:00:40.177 ********* 2026-03-24 02:42:09.435357 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435363 | orchestrator | 2026-03-24 02:42:09.435368 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-24 02:42:09.435374 | orchestrator | Tuesday 24 March 2026 02:42:06 +0000 (0:00:00.116) 0:00:40.294 ********* 2026-03-24 02:42:09.435380 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435386 | orchestrator | 2026-03-24 02:42:09.435392 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-24 02:42:09.435397 | orchestrator | 
Tuesday 24 March 2026 02:42:06 +0000 (0:00:00.113) 0:00:40.408 ********* 2026-03-24 02:42:09.435403 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435409 | orchestrator | 2026-03-24 02:42:09.435415 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-24 02:42:09.435420 | orchestrator | Tuesday 24 March 2026 02:42:06 +0000 (0:00:00.139) 0:00:40.547 ********* 2026-03-24 02:42:09.435426 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435432 | orchestrator | 2026-03-24 02:42:09.435438 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-24 02:42:09.435444 | orchestrator | Tuesday 24 March 2026 02:42:06 +0000 (0:00:00.126) 0:00:40.673 ********* 2026-03-24 02:42:09.435451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435459 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:09.435465 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435471 | orchestrator | 2026-03-24 02:42:09.435477 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-24 02:42:09.435483 | orchestrator | Tuesday 24 March 2026 02:42:06 +0000 (0:00:00.142) 0:00:40.815 ********* 2026-03-24 02:42:09.435507 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435514 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:09.435520 | orchestrator | skipping: 
[testbed-node-4] 2026-03-24 02:42:09.435527 | orchestrator | 2026-03-24 02:42:09.435533 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-24 02:42:09.435539 | orchestrator | Tuesday 24 March 2026 02:42:06 +0000 (0:00:00.135) 0:00:40.951 ********* 2026-03-24 02:42:09.435545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435551 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:09.435558 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435564 | orchestrator | 2026-03-24 02:42:09.435571 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-24 02:42:09.435577 | orchestrator | Tuesday 24 March 2026 02:42:06 +0000 (0:00:00.142) 0:00:41.093 ********* 2026-03-24 02:42:09.435646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:09.435658 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435665 | orchestrator | 2026-03-24 02:42:09.435685 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-24 02:42:09.435692 | orchestrator | Tuesday 24 March 2026 02:42:07 +0000 (0:00:00.143) 0:00:41.237 ********* 2026-03-24 02:42:09.435699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 
'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:09.435712 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435719 | orchestrator | 2026-03-24 02:42:09.435724 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-24 02:42:09.435729 | orchestrator | Tuesday 24 March 2026 02:42:07 +0000 (0:00:00.142) 0:00:41.379 ********* 2026-03-24 02:42:09.435738 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:09.435747 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435752 | orchestrator | 2026-03-24 02:42:09.435756 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-24 02:42:09.435760 | orchestrator | Tuesday 24 March 2026 02:42:07 +0000 (0:00:00.138) 0:00:41.517 ********* 2026-03-24 02:42:09.435765 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:09.435773 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435777 | orchestrator | 2026-03-24 02:42:09.435781 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-24 
02:42:09.435789 | orchestrator | Tuesday 24 March 2026 02:42:07 +0000 (0:00:00.268) 0:00:41.786 ********* 2026-03-24 02:42:09.435793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435797 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:09.435800 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435804 | orchestrator | 2026-03-24 02:42:09.435808 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-24 02:42:09.435812 | orchestrator | Tuesday 24 March 2026 02:42:07 +0000 (0:00:00.174) 0:00:41.961 ********* 2026-03-24 02:42:09.435815 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:42:09.435819 | orchestrator | 2026-03-24 02:42:09.435823 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-24 02:42:09.435827 | orchestrator | Tuesday 24 March 2026 02:42:08 +0000 (0:00:00.517) 0:00:42.478 ********* 2026-03-24 02:42:09.435831 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:42:09.435835 | orchestrator | 2026-03-24 02:42:09.435838 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-24 02:42:09.435842 | orchestrator | Tuesday 24 March 2026 02:42:08 +0000 (0:00:00.503) 0:00:42.982 ********* 2026-03-24 02:42:09.435846 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:42:09.435850 | orchestrator | 2026-03-24 02:42:09.435853 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-24 02:42:09.435857 | orchestrator | Tuesday 24 March 2026 02:42:09 +0000 (0:00:00.133) 0:00:43.115 ********* 2026-03-24 02:42:09.435861 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'vg_name': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'}) 2026-03-24 02:42:09.435866 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'vg_name': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'}) 2026-03-24 02:42:09.435870 | orchestrator | 2026-03-24 02:42:09.435873 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-24 02:42:09.435877 | orchestrator | Tuesday 24 March 2026 02:42:09 +0000 (0:00:00.144) 0:00:43.259 ********* 2026-03-24 02:42:09.435881 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435885 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:09.435889 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:09.435892 | orchestrator | 2026-03-24 02:42:09.435896 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-24 02:42:09.435900 | orchestrator | Tuesday 24 March 2026 02:42:09 +0000 (0:00:00.140) 0:00:43.399 ********* 2026-03-24 02:42:09.435904 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:09.435910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:14.915021 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:14.915143 | orchestrator | 2026-03-24 02:42:14.915165 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-24 02:42:14.915181 | 
orchestrator | Tuesday 24 March 2026 02:42:09 +0000 (0:00:00.125) 0:00:43.525 ********* 2026-03-24 02:42:14.915194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 02:42:14.915209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 02:42:14.915251 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:14.915266 | orchestrator | 2026-03-24 02:42:14.915295 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-24 02:42:14.915308 | orchestrator | Tuesday 24 March 2026 02:42:09 +0000 (0:00:00.145) 0:00:43.670 ********* 2026-03-24 02:42:14.915321 | orchestrator | ok: [testbed-node-4] => { 2026-03-24 02:42:14.915333 | orchestrator |  "lvm_report": { 2026-03-24 02:42:14.915347 | orchestrator |  "lv": [ 2026-03-24 02:42:14.915359 | orchestrator |  { 2026-03-24 02:42:14.915372 | orchestrator |  "lv_name": "osd-block-4d735645-9e18-5d04-8028-1696940918c0", 2026-03-24 02:42:14.915408 | orchestrator |  "vg_name": "ceph-4d735645-9e18-5d04-8028-1696940918c0" 2026-03-24 02:42:14.915434 | orchestrator |  }, 2026-03-24 02:42:14.915447 | orchestrator |  { 2026-03-24 02:42:14.915459 | orchestrator |  "lv_name": "osd-block-a329e066-8536-5438-99e1-d9cc3f91f537", 2026-03-24 02:42:14.915471 | orchestrator |  "vg_name": "ceph-a329e066-8536-5438-99e1-d9cc3f91f537" 2026-03-24 02:42:14.915484 | orchestrator |  } 2026-03-24 02:42:14.915497 | orchestrator |  ], 2026-03-24 02:42:14.915510 | orchestrator |  "pv": [ 2026-03-24 02:42:14.915523 | orchestrator |  { 2026-03-24 02:42:14.915536 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-24 02:42:14.915548 | orchestrator |  "vg_name": "ceph-4d735645-9e18-5d04-8028-1696940918c0" 2026-03-24 02:42:14.915561 | orchestrator |  }, 2026-03-24 
02:42:14.915575 | orchestrator |  { 2026-03-24 02:42:14.915661 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-24 02:42:14.915675 | orchestrator |  "vg_name": "ceph-a329e066-8536-5438-99e1-d9cc3f91f537" 2026-03-24 02:42:14.915689 | orchestrator |  } 2026-03-24 02:42:14.915702 | orchestrator |  ] 2026-03-24 02:42:14.915715 | orchestrator |  } 2026-03-24 02:42:14.915728 | orchestrator | } 2026-03-24 02:42:14.915741 | orchestrator | 2026-03-24 02:42:14.915754 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-24 02:42:14.915766 | orchestrator | 2026-03-24 02:42:14.915779 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-24 02:42:14.915792 | orchestrator | Tuesday 24 March 2026 02:42:09 +0000 (0:00:00.258) 0:00:43.928 ********* 2026-03-24 02:42:14.915806 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-24 02:42:14.915819 | orchestrator | 2026-03-24 02:42:14.915833 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-24 02:42:14.915846 | orchestrator | Tuesday 24 March 2026 02:42:10 +0000 (0:00:00.491) 0:00:44.420 ********* 2026-03-24 02:42:14.915860 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:14.915873 | orchestrator | 2026-03-24 02:42:14.915885 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.915899 | orchestrator | Tuesday 24 March 2026 02:42:10 +0000 (0:00:00.216) 0:00:44.636 ********* 2026-03-24 02:42:14.915913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-24 02:42:14.915926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-24 02:42:14.915938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-24 02:42:14.915951 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-24 02:42:14.915963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-24 02:42:14.915976 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-24 02:42:14.915989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-24 02:42:14.916001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-24 02:42:14.916030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-24 02:42:14.916044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-24 02:42:14.916058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-24 02:42:14.916071 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-24 02:42:14.916084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-24 02:42:14.916097 | orchestrator | 2026-03-24 02:42:14.916109 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916122 | orchestrator | Tuesday 24 March 2026 02:42:10 +0000 (0:00:00.366) 0:00:45.003 ********* 2026-03-24 02:42:14.916135 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:14.916149 | orchestrator | 2026-03-24 02:42:14.916161 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916173 | orchestrator | Tuesday 24 March 2026 02:42:11 +0000 (0:00:00.188) 0:00:45.191 ********* 2026-03-24 02:42:14.916186 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:14.916198 | orchestrator | 2026-03-24 
02:42:14.916211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916251 | orchestrator | Tuesday 24 March 2026 02:42:11 +0000 (0:00:00.170) 0:00:45.362 ********* 2026-03-24 02:42:14.916268 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:14.916281 | orchestrator | 2026-03-24 02:42:14.916296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916309 | orchestrator | Tuesday 24 March 2026 02:42:11 +0000 (0:00:00.178) 0:00:45.541 ********* 2026-03-24 02:42:14.916321 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:14.916336 | orchestrator | 2026-03-24 02:42:14.916349 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916362 | orchestrator | Tuesday 24 March 2026 02:42:11 +0000 (0:00:00.180) 0:00:45.721 ********* 2026-03-24 02:42:14.916375 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:14.916388 | orchestrator | 2026-03-24 02:42:14.916401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916415 | orchestrator | Tuesday 24 March 2026 02:42:11 +0000 (0:00:00.170) 0:00:45.892 ********* 2026-03-24 02:42:14.916428 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:14.916442 | orchestrator | 2026-03-24 02:42:14.916455 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916468 | orchestrator | Tuesday 24 March 2026 02:42:11 +0000 (0:00:00.175) 0:00:46.067 ********* 2026-03-24 02:42:14.916481 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:14.916489 | orchestrator | 2026-03-24 02:42:14.916497 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916505 | orchestrator | Tuesday 24 March 2026 02:42:12 +0000 (0:00:00.176) 
0:00:46.244 ********* 2026-03-24 02:42:14.916513 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:14.916521 | orchestrator | 2026-03-24 02:42:14.916529 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916537 | orchestrator | Tuesday 24 March 2026 02:42:12 +0000 (0:00:00.450) 0:00:46.694 ********* 2026-03-24 02:42:14.916545 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9) 2026-03-24 02:42:14.916554 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9) 2026-03-24 02:42:14.916562 | orchestrator | 2026-03-24 02:42:14.916570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916578 | orchestrator | Tuesday 24 March 2026 02:42:12 +0000 (0:00:00.399) 0:00:47.094 ********* 2026-03-24 02:42:14.916709 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5) 2026-03-24 02:42:14.916728 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5) 2026-03-24 02:42:14.916745 | orchestrator | 2026-03-24 02:42:14.916754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916762 | orchestrator | Tuesday 24 March 2026 02:42:13 +0000 (0:00:00.386) 0:00:47.480 ********* 2026-03-24 02:42:14.916770 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e) 2026-03-24 02:42:14.916778 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e) 2026-03-24 02:42:14.916786 | orchestrator | 2026-03-24 02:42:14.916804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916812 | orchestrator | Tuesday 24 
March 2026 02:42:13 +0000 (0:00:00.409) 0:00:47.890 ********* 2026-03-24 02:42:14.916818 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a) 2026-03-24 02:42:14.916834 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a) 2026-03-24 02:42:14.916841 | orchestrator | 2026-03-24 02:42:14.916848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-24 02:42:14.916855 | orchestrator | Tuesday 24 March 2026 02:42:14 +0000 (0:00:00.420) 0:00:48.310 ********* 2026-03-24 02:42:14.916862 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-24 02:42:14.916868 | orchestrator | 2026-03-24 02:42:14.916875 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:14.916882 | orchestrator | Tuesday 24 March 2026 02:42:14 +0000 (0:00:00.325) 0:00:48.635 ********* 2026-03-24 02:42:14.916888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-24 02:42:14.916895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-24 02:42:14.916902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-24 02:42:14.916909 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-24 02:42:14.916916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-24 02:42:14.916922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-24 02:42:14.916929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-24 02:42:14.916936 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-24 02:42:14.916942 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-24 02:42:14.916949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-24 02:42:14.916956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-24 02:42:14.916973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-24 02:42:22.968724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-24 02:42:22.968833 | orchestrator | 2026-03-24 02:42:22.968850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.968861 | orchestrator | Tuesday 24 March 2026 02:42:14 +0000 (0:00:00.362) 0:00:48.998 ********* 2026-03-24 02:42:22.968872 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.968882 | orchestrator | 2026-03-24 02:42:22.968893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.968903 | orchestrator | Tuesday 24 March 2026 02:42:15 +0000 (0:00:00.179) 0:00:49.177 ********* 2026-03-24 02:42:22.968913 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.968922 | orchestrator | 2026-03-24 02:42:22.968947 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.968978 | orchestrator | Tuesday 24 March 2026 02:42:15 +0000 (0:00:00.192) 0:00:49.369 ********* 2026-03-24 02:42:22.968988 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.968998 | orchestrator | 2026-03-24 02:42:22.969008 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969025 | 
orchestrator | Tuesday 24 March 2026 02:42:15 +0000 (0:00:00.186) 0:00:49.556 ********* 2026-03-24 02:42:22.969037 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.969047 | orchestrator | 2026-03-24 02:42:22.969056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969066 | orchestrator | Tuesday 24 March 2026 02:42:15 +0000 (0:00:00.186) 0:00:49.742 ********* 2026-03-24 02:42:22.969075 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.969085 | orchestrator | 2026-03-24 02:42:22.969094 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969104 | orchestrator | Tuesday 24 March 2026 02:42:16 +0000 (0:00:00.462) 0:00:50.204 ********* 2026-03-24 02:42:22.969113 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.969123 | orchestrator | 2026-03-24 02:42:22.969132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969141 | orchestrator | Tuesday 24 March 2026 02:42:16 +0000 (0:00:00.189) 0:00:50.394 ********* 2026-03-24 02:42:22.969153 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.969164 | orchestrator | 2026-03-24 02:42:22.969175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969186 | orchestrator | Tuesday 24 March 2026 02:42:16 +0000 (0:00:00.192) 0:00:50.586 ********* 2026-03-24 02:42:22.969197 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.969210 | orchestrator | 2026-03-24 02:42:22.969227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969243 | orchestrator | Tuesday 24 March 2026 02:42:16 +0000 (0:00:00.185) 0:00:50.772 ********* 2026-03-24 02:42:22.969260 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-24 02:42:22.969277 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-24 02:42:22.969294 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-24 02:42:22.969310 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-24 02:42:22.969325 | orchestrator | 2026-03-24 02:42:22.969340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969354 | orchestrator | Tuesday 24 March 2026 02:42:17 +0000 (0:00:00.573) 0:00:51.346 ********* 2026-03-24 02:42:22.969369 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.969385 | orchestrator | 2026-03-24 02:42:22.969400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969416 | orchestrator | Tuesday 24 March 2026 02:42:17 +0000 (0:00:00.185) 0:00:51.531 ********* 2026-03-24 02:42:22.969433 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.969449 | orchestrator | 2026-03-24 02:42:22.969467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969486 | orchestrator | Tuesday 24 March 2026 02:42:17 +0000 (0:00:00.187) 0:00:51.718 ********* 2026-03-24 02:42:22.969505 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.969526 | orchestrator | 2026-03-24 02:42:22.969543 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-24 02:42:22.969560 | orchestrator | Tuesday 24 March 2026 02:42:17 +0000 (0:00:00.184) 0:00:51.902 ********* 2026-03-24 02:42:22.969576 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.969617 | orchestrator | 2026-03-24 02:42:22.969633 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-24 02:42:22.969647 | orchestrator | Tuesday 24 March 2026 02:42:18 +0000 (0:00:00.194) 0:00:52.097 ********* 2026-03-24 02:42:22.969663 | orchestrator | skipping: [testbed-node-5] 2026-03-24 
02:42:22.969679 | orchestrator | 2026-03-24 02:42:22.969695 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-24 02:42:22.969711 | orchestrator | Tuesday 24 March 2026 02:42:18 +0000 (0:00:00.127) 0:00:52.225 ********* 2026-03-24 02:42:22.969741 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7dc39596-c9fc-583d-89f8-392d010fb80f'}}) 2026-03-24 02:42:22.969759 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}}) 2026-03-24 02:42:22.969774 | orchestrator | 2026-03-24 02:42:22.969791 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-24 02:42:22.969807 | orchestrator | Tuesday 24 March 2026 02:42:18 +0000 (0:00:00.183) 0:00:52.408 ********* 2026-03-24 02:42:22.969823 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'}) 2026-03-24 02:42:22.969840 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}) 2026-03-24 02:42:22.969855 | orchestrator | 2026-03-24 02:42:22.969871 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-24 02:42:22.969912 | orchestrator | Tuesday 24 March 2026 02:42:20 +0000 (0:00:01.863) 0:00:54.272 ********* 2026-03-24 02:42:22.969944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:22.969963 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:22.969980 | orchestrator | skipping: 
[testbed-node-5] 2026-03-24 02:42:22.969995 | orchestrator | 2026-03-24 02:42:22.970012 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-24 02:42:22.970159 | orchestrator | Tuesday 24 March 2026 02:42:20 +0000 (0:00:00.267) 0:00:54.540 ********* 2026-03-24 02:42:22.970182 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'}) 2026-03-24 02:42:22.970199 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}) 2026-03-24 02:42:22.970214 | orchestrator | 2026-03-24 02:42:22.970231 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-24 02:42:22.970247 | orchestrator | Tuesday 24 March 2026 02:42:21 +0000 (0:00:01.349) 0:00:55.889 ********* 2026-03-24 02:42:22.970262 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:22.970278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:22.970294 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.970311 | orchestrator | 2026-03-24 02:42:22.970325 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-24 02:42:22.970339 | orchestrator | Tuesday 24 March 2026 02:42:21 +0000 (0:00:00.128) 0:00:56.018 ********* 2026-03-24 02:42:22.970353 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.970367 | orchestrator | 2026-03-24 02:42:22.970382 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-24 02:42:22.970400 | 
orchestrator | Tuesday 24 March 2026 02:42:22 +0000 (0:00:00.125) 0:00:56.143 ********* 2026-03-24 02:42:22.970416 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:22.970432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:22.970447 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.970463 | orchestrator | 2026-03-24 02:42:22.970490 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-24 02:42:22.970505 | orchestrator | Tuesday 24 March 2026 02:42:22 +0000 (0:00:00.120) 0:00:56.263 ********* 2026-03-24 02:42:22.970519 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.970534 | orchestrator | 2026-03-24 02:42:22.970550 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-24 02:42:22.970565 | orchestrator | Tuesday 24 March 2026 02:42:22 +0000 (0:00:00.119) 0:00:56.383 ********* 2026-03-24 02:42:22.970580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:22.970624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:22.970639 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.970654 | orchestrator | 2026-03-24 02:42:22.970670 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-24 02:42:22.970686 | orchestrator | Tuesday 24 March 2026 02:42:22 +0000 (0:00:00.139) 0:00:56.523 ********* 2026-03-24 02:42:22.970703 | orchestrator | 
skipping: [testbed-node-5] 2026-03-24 02:42:22.970719 | orchestrator | 2026-03-24 02:42:22.970735 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-24 02:42:22.970751 | orchestrator | Tuesday 24 March 2026 02:42:22 +0000 (0:00:00.129) 0:00:56.652 ********* 2026-03-24 02:42:22.970767 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:22.970782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:22.970792 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:22.970802 | orchestrator | 2026-03-24 02:42:22.970812 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-24 02:42:22.970821 | orchestrator | Tuesday 24 March 2026 02:42:22 +0000 (0:00:00.138) 0:00:56.791 ********* 2026-03-24 02:42:22.970831 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:22.970841 | orchestrator | 2026-03-24 02:42:22.970850 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-24 02:42:22.970860 | orchestrator | Tuesday 24 March 2026 02:42:22 +0000 (0:00:00.123) 0:00:56.914 ********* 2026-03-24 02:42:22.970886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:28.477761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:28.478547 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.478568 | orchestrator | 2026-03-24 02:42:28.478576 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-24 02:42:28.478602 | orchestrator | Tuesday 24 March 2026 02:42:22 +0000 (0:00:00.144) 0:00:57.059 ********* 2026-03-24 02:42:28.478621 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:28.478627 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:28.478642 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.478648 | orchestrator | 2026-03-24 02:42:28.478654 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-24 02:42:28.478659 | orchestrator | Tuesday 24 March 2026 02:42:23 +0000 (0:00:00.132) 0:00:57.191 ********* 2026-03-24 02:42:28.478671 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:28.478740 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:28.478747 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.478752 | orchestrator | 2026-03-24 02:42:28.478757 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-24 02:42:28.478761 | orchestrator | Tuesday 24 March 2026 02:42:23 +0000 (0:00:00.259) 0:00:57.450 ********* 2026-03-24 02:42:28.478785 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.478790 | orchestrator | 2026-03-24 02:42:28.478794 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-24 02:42:28.478799 | orchestrator | Tuesday 24 March 2026 02:42:23 +0000 
(0:00:00.115) 0:00:57.566 ********* 2026-03-24 02:42:28.478803 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.478808 | orchestrator | 2026-03-24 02:42:28.478813 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-24 02:42:28.478818 | orchestrator | Tuesday 24 March 2026 02:42:23 +0000 (0:00:00.117) 0:00:57.684 ********* 2026-03-24 02:42:28.478822 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.478827 | orchestrator | 2026-03-24 02:42:28.478832 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-24 02:42:28.478836 | orchestrator | Tuesday 24 March 2026 02:42:23 +0000 (0:00:00.105) 0:00:57.789 ********* 2026-03-24 02:42:28.478841 | orchestrator | ok: [testbed-node-5] => { 2026-03-24 02:42:28.478846 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-24 02:42:28.478851 | orchestrator | } 2026-03-24 02:42:28.478856 | orchestrator | 2026-03-24 02:42:28.478860 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-24 02:42:28.478865 | orchestrator | Tuesday 24 March 2026 02:42:23 +0000 (0:00:00.120) 0:00:57.910 ********* 2026-03-24 02:42:28.478869 | orchestrator | ok: [testbed-node-5] => { 2026-03-24 02:42:28.478874 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-24 02:42:28.478879 | orchestrator | } 2026-03-24 02:42:28.478883 | orchestrator | 2026-03-24 02:42:28.478888 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-24 02:42:28.478892 | orchestrator | Tuesday 24 March 2026 02:42:23 +0000 (0:00:00.138) 0:00:58.048 ********* 2026-03-24 02:42:28.478897 | orchestrator | ok: [testbed-node-5] => { 2026-03-24 02:42:28.478901 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-24 02:42:28.478906 | orchestrator | } 2026-03-24 02:42:28.478911 | orchestrator | 2026-03-24 02:42:28.478915 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-24 02:42:28.478920 | orchestrator | Tuesday 24 March 2026 02:42:24 +0000 (0:00:00.131) 0:00:58.180 ********* 2026-03-24 02:42:28.478924 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:28.478929 | orchestrator | 2026-03-24 02:42:28.478933 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-24 02:42:28.478938 | orchestrator | Tuesday 24 March 2026 02:42:24 +0000 (0:00:00.480) 0:00:58.660 ********* 2026-03-24 02:42:28.478942 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:28.478947 | orchestrator | 2026-03-24 02:42:28.478952 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-24 02:42:28.478956 | orchestrator | Tuesday 24 March 2026 02:42:25 +0000 (0:00:00.520) 0:00:59.181 ********* 2026-03-24 02:42:28.478961 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:28.478965 | orchestrator | 2026-03-24 02:42:28.478972 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-24 02:42:28.478979 | orchestrator | Tuesday 24 March 2026 02:42:25 +0000 (0:00:00.511) 0:00:59.693 ********* 2026-03-24 02:42:28.478986 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:28.478993 | orchestrator | 2026-03-24 02:42:28.479000 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-24 02:42:28.479007 | orchestrator | Tuesday 24 March 2026 02:42:25 +0000 (0:00:00.139) 0:00:59.832 ********* 2026-03-24 02:42:28.479021 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479029 | orchestrator | 2026-03-24 02:42:28.479036 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-24 02:42:28.479043 | orchestrator | Tuesday 24 March 2026 02:42:25 +0000 (0:00:00.096) 0:00:59.929 ********* 2026-03-24 02:42:28.479050 | 
orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479058 | orchestrator | 2026-03-24 02:42:28.479064 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-24 02:42:28.479071 | orchestrator | Tuesday 24 March 2026 02:42:26 +0000 (0:00:00.225) 0:01:00.154 ********* 2026-03-24 02:42:28.479079 | orchestrator | ok: [testbed-node-5] => { 2026-03-24 02:42:28.479086 | orchestrator |  "vgs_report": { 2026-03-24 02:42:28.479091 | orchestrator |  "vg": [] 2026-03-24 02:42:28.479111 | orchestrator |  } 2026-03-24 02:42:28.479116 | orchestrator | } 2026-03-24 02:42:28.479121 | orchestrator | 2026-03-24 02:42:28.479126 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-24 02:42:28.479134 | orchestrator | Tuesday 24 March 2026 02:42:26 +0000 (0:00:00.131) 0:01:00.285 ********* 2026-03-24 02:42:28.479142 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479148 | orchestrator | 2026-03-24 02:42:28.479155 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-24 02:42:28.479162 | orchestrator | Tuesday 24 March 2026 02:42:26 +0000 (0:00:00.119) 0:01:00.405 ********* 2026-03-24 02:42:28.479169 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479177 | orchestrator | 2026-03-24 02:42:28.479190 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-24 02:42:28.479198 | orchestrator | Tuesday 24 March 2026 02:42:26 +0000 (0:00:00.125) 0:01:00.530 ********* 2026-03-24 02:42:28.479206 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479214 | orchestrator | 2026-03-24 02:42:28.479221 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-24 02:42:28.479228 | orchestrator | Tuesday 24 March 2026 02:42:26 +0000 (0:00:00.129) 0:01:00.660 ********* 2026-03-24 02:42:28.479234 | 
orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479239 | orchestrator | 2026-03-24 02:42:28.479243 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-24 02:42:28.479248 | orchestrator | Tuesday 24 March 2026 02:42:26 +0000 (0:00:00.123) 0:01:00.784 ********* 2026-03-24 02:42:28.479252 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479257 | orchestrator | 2026-03-24 02:42:28.479261 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-24 02:42:28.479266 | orchestrator | Tuesday 24 March 2026 02:42:26 +0000 (0:00:00.122) 0:01:00.906 ********* 2026-03-24 02:42:28.479270 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479275 | orchestrator | 2026-03-24 02:42:28.479279 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-24 02:42:28.479284 | orchestrator | Tuesday 24 March 2026 02:42:26 +0000 (0:00:00.106) 0:01:01.012 ********* 2026-03-24 02:42:28.479288 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479293 | orchestrator | 2026-03-24 02:42:28.479297 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-24 02:42:28.479302 | orchestrator | Tuesday 24 March 2026 02:42:27 +0000 (0:00:00.110) 0:01:01.123 ********* 2026-03-24 02:42:28.479307 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479314 | orchestrator | 2026-03-24 02:42:28.479322 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-24 02:42:28.479330 | orchestrator | Tuesday 24 March 2026 02:42:27 +0000 (0:00:00.124) 0:01:01.248 ********* 2026-03-24 02:42:28.479338 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479346 | orchestrator | 2026-03-24 02:42:28.479353 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-24 02:42:28.479360 | orchestrator | Tuesday 24 March 2026 02:42:27 +0000 (0:00:00.127) 0:01:01.376 ********* 2026-03-24 02:42:28.479368 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479379 | orchestrator | 2026-03-24 02:42:28.479387 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-24 02:42:28.479395 | orchestrator | Tuesday 24 March 2026 02:42:27 +0000 (0:00:00.128) 0:01:01.504 ********* 2026-03-24 02:42:28.479402 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479409 | orchestrator | 2026-03-24 02:42:28.479417 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-24 02:42:28.479425 | orchestrator | Tuesday 24 March 2026 02:42:27 +0000 (0:00:00.258) 0:01:01.762 ********* 2026-03-24 02:42:28.479432 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479439 | orchestrator | 2026-03-24 02:42:28.479447 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-24 02:42:28.479453 | orchestrator | Tuesday 24 March 2026 02:42:27 +0000 (0:00:00.138) 0:01:01.900 ********* 2026-03-24 02:42:28.479458 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479463 | orchestrator | 2026-03-24 02:42:28.479467 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-24 02:42:28.479472 | orchestrator | Tuesday 24 March 2026 02:42:27 +0000 (0:00:00.128) 0:01:02.029 ********* 2026-03-24 02:42:28.479476 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479481 | orchestrator | 2026-03-24 02:42:28.479486 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-24 02:42:28.479490 | orchestrator | Tuesday 24 March 2026 02:42:28 +0000 (0:00:00.122) 0:01:02.151 ********* 2026-03-24 02:42:28.479495 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:28.479499 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:28.479504 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479509 | orchestrator | 2026-03-24 02:42:28.479513 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-24 02:42:28.479518 | orchestrator | Tuesday 24 March 2026 02:42:28 +0000 (0:00:00.139) 0:01:02.291 ********* 2026-03-24 02:42:28.479522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:28.479527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:28.479531 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:28.479536 | orchestrator | 2026-03-24 02:42:28.479540 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-24 02:42:28.479545 | orchestrator | Tuesday 24 March 2026 02:42:28 +0000 (0:00:00.139) 0:01:02.430 ********* 2026-03-24 02:42:28.479555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:31.178105 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:31.178231 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:31.178249 | orchestrator | 2026-03-24 02:42:31.178262 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-24 02:42:31.178274 | orchestrator | Tuesday 24 March 2026 02:42:28 +0000 (0:00:00.137) 0:01:02.568 ********* 2026-03-24 02:42:31.178302 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:31.178315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:31.178325 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:31.178336 | orchestrator | 2026-03-24 02:42:31.178370 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-24 02:42:31.178381 | orchestrator | Tuesday 24 March 2026 02:42:28 +0000 (0:00:00.125) 0:01:02.694 ********* 2026-03-24 02:42:31.178392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:31.178403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:31.178414 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:31.178425 | orchestrator | 2026-03-24 02:42:31.178436 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-24 02:42:31.178446 | orchestrator | Tuesday 24 March 2026 02:42:28 +0000 (0:00:00.145) 0:01:02.839 ********* 2026-03-24 02:42:31.178457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:31.178466 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:31.178476 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:31.178484 | orchestrator | 2026-03-24 02:42:31.178494 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-24 02:42:31.178504 | orchestrator | Tuesday 24 March 2026 02:42:28 +0000 (0:00:00.132) 0:01:02.972 ********* 2026-03-24 02:42:31.178514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:31.178525 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:31.178536 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:31.178546 | orchestrator | 2026-03-24 02:42:31.178556 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-24 02:42:31.178566 | orchestrator | Tuesday 24 March 2026 02:42:29 +0000 (0:00:00.139) 0:01:03.111 ********* 2026-03-24 02:42:31.178575 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:31.178612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:31.178623 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:31.178633 | orchestrator | 2026-03-24 02:42:31.178643 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-24 02:42:31.178653 | orchestrator | Tuesday 24 March 2026 02:42:29 +0000 (0:00:00.138) 0:01:03.249 ********* 2026-03-24 02:42:31.178663 | 
orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:31.178674 | orchestrator | 2026-03-24 02:42:31.178683 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-24 02:42:31.178692 | orchestrator | Tuesday 24 March 2026 02:42:29 +0000 (0:00:00.614) 0:01:03.864 ********* 2026-03-24 02:42:31.178702 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:31.178712 | orchestrator | 2026-03-24 02:42:31.178722 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-24 02:42:31.178732 | orchestrator | Tuesday 24 March 2026 02:42:30 +0000 (0:00:00.529) 0:01:04.394 ********* 2026-03-24 02:42:31.178742 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:31.178752 | orchestrator | 2026-03-24 02:42:31.178761 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-24 02:42:31.178771 | orchestrator | Tuesday 24 March 2026 02:42:30 +0000 (0:00:00.138) 0:01:04.533 ********* 2026-03-24 02:42:31.178781 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'vg_name': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'}) 2026-03-24 02:42:31.178804 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'vg_name': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}) 2026-03-24 02:42:31.178815 | orchestrator | 2026-03-24 02:42:31.178825 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-24 02:42:31.178835 | orchestrator | Tuesday 24 March 2026 02:42:30 +0000 (0:00:00.157) 0:01:04.690 ********* 2026-03-24 02:42:31.178869 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:31.178879 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:31.178889 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:31.178899 | orchestrator | 2026-03-24 02:42:31.178927 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-24 02:42:31.178937 | orchestrator | Tuesday 24 March 2026 02:42:30 +0000 (0:00:00.134) 0:01:04.825 ********* 2026-03-24 02:42:31.178947 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:31.178957 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:31.178967 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:31.178978 | orchestrator | 2026-03-24 02:42:31.178987 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-24 02:42:31.178997 | orchestrator | Tuesday 24 March 2026 02:42:30 +0000 (0:00:00.145) 0:01:04.971 ********* 2026-03-24 02:42:31.179006 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 02:42:31.179016 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 02:42:31.179025 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:31.179035 | orchestrator | 2026-03-24 02:42:31.179044 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-24 02:42:31.179054 | orchestrator | Tuesday 24 March 2026 02:42:31 +0000 (0:00:00.143) 0:01:05.114 ********* 2026-03-24 02:42:31.179065 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-24 02:42:31.179075 | orchestrator |  "lvm_report": { 2026-03-24 02:42:31.179085 | orchestrator |  "lv": [ 2026-03-24 02:42:31.179095 | orchestrator |  { 2026-03-24 02:42:31.179107 | orchestrator |  "lv_name": "osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f", 2026-03-24 02:42:31.179119 | orchestrator |  "vg_name": "ceph-7dc39596-c9fc-583d-89f8-392d010fb80f" 2026-03-24 02:42:31.179129 | orchestrator |  }, 2026-03-24 02:42:31.179140 | orchestrator |  { 2026-03-24 02:42:31.179150 | orchestrator |  "lv_name": "osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59", 2026-03-24 02:42:31.179160 | orchestrator |  "vg_name": "ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59" 2026-03-24 02:42:31.179170 | orchestrator |  } 2026-03-24 02:42:31.179180 | orchestrator |  ], 2026-03-24 02:42:31.179190 | orchestrator |  "pv": [ 2026-03-24 02:42:31.179201 | orchestrator |  { 2026-03-24 02:42:31.179211 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-24 02:42:31.179221 | orchestrator |  "vg_name": "ceph-7dc39596-c9fc-583d-89f8-392d010fb80f" 2026-03-24 02:42:31.179230 | orchestrator |  }, 2026-03-24 02:42:31.179239 | orchestrator |  { 2026-03-24 02:42:31.179248 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-24 02:42:31.179259 | orchestrator |  "vg_name": "ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59" 2026-03-24 02:42:31.179284 | orchestrator |  } 2026-03-24 02:42:31.179295 | orchestrator |  ] 2026-03-24 02:42:31.179305 | orchestrator |  } 2026-03-24 02:42:31.179314 | orchestrator | } 2026-03-24 02:42:31.179325 | orchestrator | 2026-03-24 02:42:31.179335 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:42:31.179345 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-24 02:42:31.179356 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-24 02:42:31.179366 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-24 02:42:31.179376 | orchestrator | 2026-03-24 02:42:31.179386 | orchestrator | 2026-03-24 02:42:31.179397 | orchestrator | 2026-03-24 02:42:31.179407 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:42:31.179416 | orchestrator | Tuesday 24 March 2026 02:42:31 +0000 (0:00:00.137) 0:01:05.252 ********* 2026-03-24 02:42:31.179425 | orchestrator | =============================================================================== 2026-03-24 02:42:31.179436 | orchestrator | Create block VGs -------------------------------------------------------- 5.64s 2026-03-24 02:42:31.179446 | orchestrator | Create block LVs -------------------------------------------------------- 4.17s 2026-03-24 02:42:31.179457 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.65s 2026-03-24 02:42:31.179468 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.62s 2026-03-24 02:42:31.179478 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.58s 2026-03-24 02:42:31.179489 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2026-03-24 02:42:31.179500 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2026-03-24 02:42:31.179510 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s 2026-03-24 02:42:31.179533 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s 2026-03-24 02:42:31.386086 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.95s 2026-03-24 02:42:31.386215 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-03-24 02:42:31.386232 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-03-24 02:42:31.386244 | orchestrator | Print LVM report data --------------------------------------------------- 0.65s 2026-03-24 02:42:31.386274 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s 2026-03-24 02:42:31.386286 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2026-03-24 02:42:31.386297 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2026-03-24 02:42:31.386307 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-03-24 02:42:31.386318 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.55s 2026-03-24 02:42:31.386335 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.55s 2026-03-24 02:42:31.386354 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.55s 2026-03-24 02:42:43.326568 | orchestrator | 2026-03-24 02:42:43 | INFO  | Task bb43542b-cca1-4258-a58a-7e8ca9886fea (facts) was prepared for execution. 2026-03-24 02:42:43.326707 | orchestrator | 2026-03-24 02:42:43 | INFO  | It takes a moment until task bb43542b-cca1-4258-a58a-7e8ca9886fea (facts) has been started and output is visible here. 
2026-03-24 02:42:56.638971 | orchestrator | 2026-03-24 02:42:56.639091 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-24 02:42:56.639108 | orchestrator | 2026-03-24 02:42:56.639121 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-24 02:42:56.639159 | orchestrator | Tuesday 24 March 2026 02:42:47 +0000 (0:00:00.237) 0:00:00.237 ********* 2026-03-24 02:42:56.639191 | orchestrator | ok: [testbed-manager] 2026-03-24 02:42:56.639215 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:42:56.639227 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:42:56.639237 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:42:56.639253 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:42:56.639273 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:42:56.639293 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:56.639313 | orchestrator | 2026-03-24 02:42:56.639331 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-24 02:42:56.639352 | orchestrator | Tuesday 24 March 2026 02:42:48 +0000 (0:00:01.015) 0:00:01.252 ********* 2026-03-24 02:42:56.639371 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:42:56.639388 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:42:56.639405 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:42:56.639423 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:42:56.639441 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:42:56.639460 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:56.639479 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:56.639524 | orchestrator | 2026-03-24 02:42:56.639538 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-24 02:42:56.639551 | orchestrator | 2026-03-24 02:42:56.639563 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-24 02:42:56.639576 | orchestrator | Tuesday 24 March 2026 02:42:49 +0000 (0:00:01.108) 0:00:02.361 ********* 2026-03-24 02:42:56.639614 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:42:56.639635 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:42:56.639655 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:42:56.639674 | orchestrator | ok: [testbed-manager] 2026-03-24 02:42:56.639692 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:42:56.639712 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:42:56.639730 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:42:56.639747 | orchestrator | 2026-03-24 02:42:56.639758 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-24 02:42:56.639769 | orchestrator | 2026-03-24 02:42:56.639780 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-24 02:42:56.639791 | orchestrator | Tuesday 24 March 2026 02:42:55 +0000 (0:00:06.605) 0:00:08.967 ********* 2026-03-24 02:42:56.639802 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:42:56.639813 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:42:56.639824 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:42:56.639834 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:42:56.639845 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:42:56.639855 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:42:56.639866 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:42:56.639877 | orchestrator | 2026-03-24 02:42:56.639887 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:42:56.639899 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:42:56.639914 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-24 02:42:56.639933 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:42:56.639952 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:42:56.639971 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:42:56.639990 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:42:56.640024 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:42:56.640042 | orchestrator | 2026-03-24 02:42:56.640060 | orchestrator | 2026-03-24 02:42:56.640079 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:42:56.640097 | orchestrator | Tuesday 24 March 2026 02:42:56 +0000 (0:00:00.474) 0:00:09.442 ********* 2026-03-24 02:42:56.640115 | orchestrator | =============================================================================== 2026-03-24 02:42:56.640151 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.61s 2026-03-24 02:42:56.640163 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.11s 2026-03-24 02:42:56.640174 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s 2026-03-24 02:42:56.640185 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2026-03-24 02:42:58.582624 | orchestrator | 2026-03-24 02:42:58 | INFO  | Task 10e9586e-d616-4760-9231-ca6278ae1d92 (ceph) was prepared for execution. 2026-03-24 02:42:58.582704 | orchestrator | 2026-03-24 02:42:58 | INFO  | It takes a moment until task 10e9586e-d616-4760-9231-ca6278ae1d92 (ceph) has been started and output is visible here. 
2026-03-24 02:43:14.034649 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-24 02:43:14.034739 | orchestrator | 2.16.14 2026-03-24 02:43:14.034748 | orchestrator | 2026-03-24 02:43:14.034754 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-24 02:43:14.034761 | orchestrator | 2026-03-24 02:43:14.034766 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 02:43:14.034772 | orchestrator | Tuesday 24 March 2026 02:43:03 +0000 (0:00:00.572) 0:00:00.572 ********* 2026-03-24 02:43:14.034779 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:43:14.034784 | orchestrator | 2026-03-24 02:43:14.034790 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 02:43:14.034795 | orchestrator | Tuesday 24 March 2026 02:43:03 +0000 (0:00:00.921) 0:00:01.494 ********* 2026-03-24 02:43:14.034800 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:43:14.034805 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:43:14.034810 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:43:14.034815 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:43:14.034820 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:43:14.034825 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:43:14.034848 | orchestrator | 2026-03-24 02:43:14.034854 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 02:43:14.034860 | orchestrator | Tuesday 24 March 2026 02:43:05 +0000 (0:00:01.139) 0:00:02.634 ********* 2026-03-24 02:43:14.034865 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:43:14.034870 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:43:14.034875 | orchestrator | ok: [testbed-node-5] 2026-03-24 
02:43:14.034880 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:43:14.034885 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:43:14.034890 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:43:14.034895 | orchestrator | 2026-03-24 02:43:14.034900 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 02:43:14.034905 | orchestrator | Tuesday 24 March 2026 02:43:05 +0000 (0:00:00.622) 0:00:03.256 ********* 2026-03-24 02:43:14.034910 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:43:14.034915 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:43:14.034920 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:43:14.034925 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:43:14.034930 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:43:14.034935 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:43:14.034956 | orchestrator | 2026-03-24 02:43:14.034962 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 02:43:14.034967 | orchestrator | Tuesday 24 March 2026 02:43:06 +0000 (0:00:00.866) 0:00:04.123 ********* 2026-03-24 02:43:14.034972 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:43:14.034977 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:43:14.034982 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:43:14.034987 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:43:14.034992 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:43:14.034997 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:43:14.035002 | orchestrator | 2026-03-24 02:43:14.035007 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 02:43:14.035012 | orchestrator | Tuesday 24 March 2026 02:43:07 +0000 (0:00:00.656) 0:00:04.780 ********* 2026-03-24 02:43:14.035017 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:43:14.035021 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:43:14.035026 | orchestrator | ok: 
[testbed-node-5] 2026-03-24 02:43:14.035031 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:43:14.035036 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:43:14.035041 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:43:14.035046 | orchestrator | 2026-03-24 02:43:14.035051 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 02:43:14.035056 | orchestrator | Tuesday 24 March 2026 02:43:07 +0000 (0:00:00.505) 0:00:05.285 ********* 2026-03-24 02:43:14.035061 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:43:14.035066 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:43:14.035071 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:43:14.035076 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:43:14.035081 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:43:14.035086 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:43:14.035091 | orchestrator | 2026-03-24 02:43:14.035096 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 02:43:14.035101 | orchestrator | Tuesday 24 March 2026 02:43:08 +0000 (0:00:00.650) 0:00:05.936 ********* 2026-03-24 02:43:14.035106 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:14.035112 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:14.035117 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:14.035122 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:14.035127 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:14.035132 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:14.035137 | orchestrator | 2026-03-24 02:43:14.035142 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 02:43:14.035147 | orchestrator | Tuesday 24 March 2026 02:43:08 +0000 (0:00:00.544) 0:00:06.481 ********* 2026-03-24 02:43:14.035152 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:43:14.035157 | orchestrator | 
ok: [testbed-node-4] 2026-03-24 02:43:14.035162 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:43:14.035167 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:43:14.035172 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:43:14.035177 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:43:14.035182 | orchestrator | 2026-03-24 02:43:14.035198 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 02:43:14.035204 | orchestrator | Tuesday 24 March 2026 02:43:09 +0000 (0:00:00.606) 0:00:07.087 ********* 2026-03-24 02:43:14.035210 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 02:43:14.035216 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 02:43:14.035222 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 02:43:14.035228 | orchestrator | 2026-03-24 02:43:14.035233 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 02:43:14.035239 | orchestrator | Tuesday 24 March 2026 02:43:10 +0000 (0:00:00.579) 0:00:07.667 ********* 2026-03-24 02:43:14.035245 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:43:14.035251 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:43:14.035266 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:43:14.035288 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:43:14.035297 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:43:14.035306 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:43:14.035314 | orchestrator | 2026-03-24 02:43:14.035322 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 02:43:14.035330 | orchestrator | Tuesday 24 March 2026 02:43:10 +0000 (0:00:00.647) 0:00:08.314 ********* 2026-03-24 02:43:14.035337 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-03-24 02:43:14.035345 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 02:43:14.035353 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 02:43:14.035361 | orchestrator | 2026-03-24 02:43:14.035369 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 02:43:14.035376 | orchestrator | Tuesday 24 March 2026 02:43:12 +0000 (0:00:02.055) 0:00:10.369 ********* 2026-03-24 02:43:14.035383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-24 02:43:14.035391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-24 02:43:14.035399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-24 02:43:14.035407 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:14.035414 | orchestrator | 2026-03-24 02:43:14.035423 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 02:43:14.035431 | orchestrator | Tuesday 24 March 2026 02:43:13 +0000 (0:00:00.359) 0:00:10.729 ********* 2026-03-24 02:43:14.035441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 02:43:14.035451 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 02:43:14.035460 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 02:43:14.035468 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:14.035476 | orchestrator | 2026-03-24 02:43:14.035484 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 02:43:14.035492 | orchestrator | Tuesday 24 March 2026 02:43:13 +0000 (0:00:00.553) 0:00:11.282 ********* 2026-03-24 02:43:14.035502 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:14.035513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:14.035521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:14.035538 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:14.035546 | orchestrator | 2026-03-24 02:43:14.035552 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-03-24 02:43:14.035557 | orchestrator | Tuesday 24 March 2026 02:43:13 +0000 (0:00:00.135) 0:00:11.418 ********* 2026-03-24 02:43:14.035575 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 02:43:11.485111', 'end': '2026-03-24 02:43:11.528126', 'delta': '0:00:00.043015', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 02:43:21.969899 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 02:43:12.020122', 'end': '2026-03-24 02:43:12.060250', 'delta': '0:00:00.040128', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 02:43:21.970111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 02:43:12.512104', 'end': '2026-03-24 02:43:12.556733', 'delta': 
'0:00:00.044629', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 02:43:21.970139 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.970157 | orchestrator | 2026-03-24 02:43:21.970173 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 02:43:21.970191 | orchestrator | Tuesday 24 March 2026 02:43:14 +0000 (0:00:00.165) 0:00:11.584 ********* 2026-03-24 02:43:21.970206 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:43:21.970221 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:43:21.970235 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:43:21.970250 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:43:21.970265 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:43:21.970280 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:43:21.970294 | orchestrator | 2026-03-24 02:43:21.970309 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 02:43:21.970324 | orchestrator | Tuesday 24 March 2026 02:43:14 +0000 (0:00:00.714) 0:00:12.298 ********* 2026-03-24 02:43:21.970339 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 02:43:21.970353 | orchestrator | 2026-03-24 02:43:21.970368 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 02:43:21.970383 | orchestrator | Tuesday 24 March 2026 02:43:15 +0000 (0:00:00.749) 0:00:13.048 ********* 2026-03-24 02:43:21.970399 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.970413 | 
orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.970456 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.970474 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.970490 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.970504 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:21.970518 | orchestrator | 2026-03-24 02:43:21.970534 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 02:43:21.970550 | orchestrator | Tuesday 24 March 2026 02:43:16 +0000 (0:00:00.646) 0:00:13.694 ********* 2026-03-24 02:43:21.970565 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.970580 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.970595 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.970633 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.970646 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.970660 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:21.970673 | orchestrator | 2026-03-24 02:43:21.970687 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 02:43:21.970700 | orchestrator | Tuesday 24 March 2026 02:43:17 +0000 (0:00:00.879) 0:00:14.574 ********* 2026-03-24 02:43:21.970714 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.970728 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.970743 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.970757 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.970771 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.970785 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:21.970799 | orchestrator | 2026-03-24 02:43:21.970813 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 02:43:21.970841 | orchestrator | Tuesday 24 March 2026 02:43:17 +0000 
(0:00:00.496) 0:00:15.071 ********* 2026-03-24 02:43:21.970855 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.970869 | orchestrator | 2026-03-24 02:43:21.970883 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 02:43:21.970896 | orchestrator | Tuesday 24 March 2026 02:43:17 +0000 (0:00:00.109) 0:00:15.180 ********* 2026-03-24 02:43:21.970907 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.970921 | orchestrator | 2026-03-24 02:43:21.970934 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 02:43:21.970947 | orchestrator | Tuesday 24 March 2026 02:43:17 +0000 (0:00:00.187) 0:00:15.368 ********* 2026-03-24 02:43:21.970959 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.970972 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.970985 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.970997 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.971010 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.971022 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:21.971035 | orchestrator | 2026-03-24 02:43:21.971070 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 02:43:21.971084 | orchestrator | Tuesday 24 March 2026 02:43:18 +0000 (0:00:00.612) 0:00:15.980 ********* 2026-03-24 02:43:21.971097 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.971111 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.971125 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.971138 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.971151 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.971164 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:21.971176 | orchestrator | 2026-03-24 02:43:21.971190 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-03-24 02:43:21.971203 | orchestrator | Tuesday 24 March 2026 02:43:18 +0000 (0:00:00.508) 0:00:16.489 ********* 2026-03-24 02:43:21.971217 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.971230 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.971242 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.971255 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.971268 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.971296 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:21.971310 | orchestrator | 2026-03-24 02:43:21.971323 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 02:43:21.971336 | orchestrator | Tuesday 24 March 2026 02:43:19 +0000 (0:00:00.637) 0:00:17.126 ********* 2026-03-24 02:43:21.971349 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.971362 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.971375 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.971389 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.971402 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.971415 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:21.971428 | orchestrator | 2026-03-24 02:43:21.971441 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 02:43:21.971454 | orchestrator | Tuesday 24 March 2026 02:43:20 +0000 (0:00:00.533) 0:00:17.660 ********* 2026-03-24 02:43:21.971467 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.971481 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.971494 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.971507 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.971520 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.971534 | orchestrator 
| skipping: [testbed-node-2] 2026-03-24 02:43:21.971547 | orchestrator | 2026-03-24 02:43:21.971562 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 02:43:21.971575 | orchestrator | Tuesday 24 March 2026 02:43:20 +0000 (0:00:00.624) 0:00:18.284 ********* 2026-03-24 02:43:21.971589 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.971676 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.971692 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.971706 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.971720 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.971733 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:21.971747 | orchestrator | 2026-03-24 02:43:21.971762 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 02:43:21.971777 | orchestrator | Tuesday 24 March 2026 02:43:21 +0000 (0:00:00.500) 0:00:18.785 ********* 2026-03-24 02:43:21.971791 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:21.971803 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:21.971815 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:21.971828 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:21.971842 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:21.971855 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:21.971869 | orchestrator | 2026-03-24 02:43:21.971881 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 02:43:21.971894 | orchestrator | Tuesday 24 March 2026 02:43:21 +0000 (0:00:00.632) 0:00:19.417 ********* 2026-03-24 02:43:21.971910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:21.971942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:21.971986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.062156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.062243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.062250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.062255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.062259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.062263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.062267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.062297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.062319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.062326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.062331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.062339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.062354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.160560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 
'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.160744 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.160753 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.228216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.228289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.228298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.228307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.228337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.228365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.228374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-24 02:43:22.228381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.228402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.228409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.228422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.228437 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.228450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.481583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.481685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.481693 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:22.481700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481770 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:22.481779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.481789 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.481793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.481804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674300 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.674483 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:22.674492 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:22.674501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.674515 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:22.674532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674583 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.674597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-24 02:43:22.873772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.873868 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-24 02:43:22.873887 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:43:22.873900 | orchestrator | 2026-03-24 02:43:22.873914 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 02:43:22.873926 | orchestrator | Tuesday 24 March 2026 02:43:22 +0000 (0:00:00.805) 0:00:20.222 ********* 2026-03-24 02:43:22.873938 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:22.873970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:22.874004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:22.874070 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:22.874092 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:22.874104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:22.874111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:22.874118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:22.874149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.142333 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.142482 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.142520 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 
'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.142592 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.142731 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': 
{'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.142745 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.142755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.142774 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.142795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.168746 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.168865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.168881 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.168893 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.168932 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.168946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.168976 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.168995 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 
02:43:23.169009 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.169040 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277351 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277363 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277390 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277401 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277426 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277437 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:43:23.277452 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277462 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277477 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277485 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277493 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.277509 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.429866 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.430073 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.430098 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.430176 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-24 02:43:23.430206 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.430220 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.430247 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.430260 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:43:23.430273 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.430285 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.430330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.430362 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.564824 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.564928 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.564950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': 
{'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.564988 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.565012 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.565078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.565098 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.565115 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage 
controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.565127 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.565143 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.565161 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.565182 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.675402 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:43:23.675506 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:43:23.675527 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-24 02:43:23.675561 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.675589 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:43:23.675597 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.675666 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.675674 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.675681 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:43:23.675687 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:43:23.675698 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:43:23.675715 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:43:23.675727 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:43:23.675746 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:43:32.543323 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:43:32.544241 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:32.544279 | orchestrator |
2026-03-24 02:43:32.544293 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-24 02:43:32.544306 | orchestrator | Tuesday 24 March 2026 02:43:23 +0000 (0:00:01.003) 0:00:21.225 *********
2026-03-24 02:43:32.544316 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:32.544327 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:32.544338 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:32.544348 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:43:32.544358 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:43:32.544369 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:43:32.544379 | orchestrator |
2026-03-24 02:43:32.544390 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-24 02:43:32.544400 | orchestrator | Tuesday 24 March 2026 02:43:24 +0000 (0:00:00.911) 0:00:22.137 *********
2026-03-24 02:43:32.544410 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:32.544420 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:32.544430 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:32.544440 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:43:32.544450 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:43:32.544460 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:43:32.544470 | orchestrator |
2026-03-24 02:43:32.544480 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 02:43:32.544490 | orchestrator | Tuesday 24 March 2026 02:43:25 +0000 (0:00:00.606) 0:00:22.743 *********
2026-03-24 02:43:32.544501 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.544511 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:32.544521 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:32.544531 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:32.544541 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:32.544551 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:32.544561 | orchestrator |
2026-03-24 02:43:32.544571 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 02:43:32.544582 | orchestrator | Tuesday 24 March 2026 02:43:25 +0000 (0:00:00.487) 0:00:23.230 *********
2026-03-24 02:43:32.544592 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.544602 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:32.544638 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:32.544648 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:32.544658 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:32.544667 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:32.544677 | orchestrator |
2026-03-24 02:43:32.544686 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 02:43:32.544695 | orchestrator | Tuesday 24 March 2026 02:43:26 +0000 (0:00:00.612) 0:00:23.843 *********
2026-03-24 02:43:32.544705 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.544714 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:32.544723 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:32.544733 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:32.544742 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:32.544778 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:32.544787 | orchestrator |
2026-03-24 02:43:32.544797 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 02:43:32.544806 | orchestrator | Tuesday 24 March 2026 02:43:26 +0000 (0:00:00.531) 0:00:24.374 *********
2026-03-24 02:43:32.544816 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.544825 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:32.544834 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:32.544844 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:32.544853 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:32.544862 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:32.544872 | orchestrator |
2026-03-24 02:43:32.544881 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-24 02:43:32.544890 | orchestrator | Tuesday 24 March 2026 02:43:27 +0000 (0:00:00.694) 0:00:25.069 *********
2026-03-24 02:43:32.544900 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-24 02:43:32.544910 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-24 02:43:32.544920 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-24 02:43:32.544929 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-24 02:43:32.544939 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-24 02:43:32.544948 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-24 02:43:32.544957 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-24 02:43:32.544966 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 02:43:32.544976 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-24 02:43:32.544985 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-24 02:43:32.544994 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-24 02:43:32.545004 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-24 02:43:32.545013 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-24 02:43:32.545023 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-24 02:43:32.545032 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-24 02:43:32.545060 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-24 02:43:32.545070 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-24 02:43:32.545080 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-24 02:43:32.545089 | orchestrator |
2026-03-24 02:43:32.545109 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-24 02:43:32.545119 | orchestrator | Tuesday 24 March 2026 02:43:28 +0000 (0:00:01.457) 0:00:26.527 *********
2026-03-24 02:43:32.545128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-24 02:43:32.545138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-24 02:43:32.545147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-24 02:43:32.545157 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.545166 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-24 02:43:32.545175 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-24 02:43:32.545185 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-24 02:43:32.545194 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:32.545203 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-24 02:43:32.545214 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-24 02:43:32.545224 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-24 02:43:32.545234 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:32.545245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 02:43:32.545255 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-24 02:43:32.545264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-24 02:43:32.545283 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:32.545292 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-24 02:43:32.545302 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-24 02:43:32.545311 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-24 02:43:32.545321 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:32.545327 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-24 02:43:32.545333 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-24 02:43:32.545339 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-24 02:43:32.545344 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:32.545350 | orchestrator |
2026-03-24 02:43:32.545356 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-24 02:43:32.545362 | orchestrator | Tuesday 24 March 2026 02:43:29 +0000 (0:00:00.681) 0:00:27.209 *********
2026-03-24 02:43:32.545368 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:32.545374 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:32.545380 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:32.545386 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:43:32.545392 | orchestrator |
2026-03-24 02:43:32.545399 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 02:43:32.545406 | orchestrator | Tuesday 24 March 2026 02:43:30 +0000 (0:00:00.812) 0:00:28.021 *********
2026-03-24 02:43:32.545412 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.545417 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:32.545423 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:32.545429 | orchestrator |
2026-03-24 02:43:32.545435 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 02:43:32.545441 | orchestrator | Tuesday 24 March 2026 02:43:30 +0000 (0:00:00.289) 0:00:28.310 *********
2026-03-24 02:43:32.545447 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.545452 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:32.545458 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:32.545464 | orchestrator |
2026-03-24 02:43:32.545470 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 02:43:32.545476 | orchestrator | Tuesday 24 March 2026 02:43:31 +0000 (0:00:00.270) 0:00:28.581 *********
2026-03-24 02:43:32.545482 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.545487 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:32.545493 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:32.545499 | orchestrator |
2026-03-24 02:43:32.545505 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 02:43:32.545510 | orchestrator | Tuesday 24 March 2026 02:43:31 +0000 (0:00:00.278) 0:00:28.859 *********
2026-03-24 02:43:32.545516 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:32.545522 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:32.545528 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:32.545534 | orchestrator |
2026-03-24 02:43:32.545540 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 02:43:32.545545 | orchestrator | Tuesday 24 March 2026 02:43:31 +0000 (0:00:00.532) 0:00:29.391 *********
2026-03-24 02:43:32.545551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:43:32.545598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 02:43:32.545649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 02:43:32.545659 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.545665 | orchestrator |
2026-03-24 02:43:32.545671 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 02:43:32.545677 | orchestrator | Tuesday 24 March 2026 02:43:32 +0000 (0:00:00.357) 0:00:29.748 *********
2026-03-24 02:43:32.545683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:43:32.545695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 02:43:32.545701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 02:43:32.545707 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:32.545713 | orchestrator |
2026-03-24 02:43:32.545729 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 02:43:48.369395 | orchestrator | Tuesday 24 March 2026 02:43:32 +0000 (0:00:00.346) 0:00:30.095 *********
2026-03-24 02:43:48.369478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:43:48.369498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 02:43:48.369503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 02:43:48.369508 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:48.369513 | orchestrator |
2026-03-24 02:43:48.369519 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 02:43:48.369524 | orchestrator | Tuesday 24 March 2026 02:43:32 +0000 (0:00:00.353) 0:00:30.448 *********
2026-03-24 02:43:48.369529 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:48.369535 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:48.369539 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:48.369544 | orchestrator |
2026-03-24 02:43:48.369549 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 02:43:48.369554 | orchestrator | Tuesday 24 March 2026 02:43:33 +0000 (0:00:00.288) 0:00:30.737 *********
2026-03-24 02:43:48.369558 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-24 02:43:48.369563 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-24 02:43:48.369568 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-24 02:43:48.369573 | orchestrator |
2026-03-24 02:43:48.369577 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-24 02:43:48.369582 | orchestrator | Tuesday 24 March 2026 02:43:33 +0000 (0:00:00.760) 0:00:31.498 *********
2026-03-24 02:43:48.369587 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 02:43:48.369592 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 02:43:48.369597 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 02:43:48.369601 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:43:48.369606 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 02:43:48.369611 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 02:43:48.369636 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 02:43:48.369642 | orchestrator |
2026-03-24 02:43:48.369646 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-24 02:43:48.369651 | orchestrator | Tuesday 24 March 2026 02:43:34 +0000 (0:00:00.685) 0:00:32.184 *********
2026-03-24 02:43:48.369655 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 02:43:48.369660 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 02:43:48.369664 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 02:43:48.369669 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:43:48.369674 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 02:43:48.369678 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 02:43:48.369683 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 02:43:48.369687 | orchestrator |
2026-03-24 02:43:48.369692 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 02:43:48.369696 | orchestrator | Tuesday 24 March 2026 02:43:36 +0000 (0:00:01.625) 0:00:33.810 *********
2026-03-24 02:43:48.369718 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:43:48.369724 | orchestrator |
2026-03-24 02:43:48.369728 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 02:43:48.369733 | orchestrator | Tuesday 24 March 2026 02:43:37 +0000 (0:00:00.985) 0:00:34.795 *********
2026-03-24 02:43:48.369738 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:43:48.369742 | orchestrator |
2026-03-24 02:43:48.369747 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 02:43:48.369751 | orchestrator | Tuesday 24 March 2026 02:43:38 +0000 (0:00:01.013) 0:00:35.809 *********
2026-03-24 02:43:48.369756 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:48.369761 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:48.369766 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:48.369772 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:43:48.369780 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:43:48.369787 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:43:48.369795 | orchestrator |
2026-03-24 02:43:48.369802 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 02:43:48.369810 | orchestrator | Tuesday 24 March 2026 02:43:39 +0000 (0:00:01.072) 0:00:36.881 *********
2026-03-24 02:43:48.369818 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:48.369827 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:48.369833 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:48.369838 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:48.369842 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:48.369847 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:48.369851 | orchestrator |
2026-03-24 02:43:48.369856 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 02:43:48.369860 | orchestrator | Tuesday 24 March 2026 02:43:40 +0000 (0:00:00.682) 0:00:37.564 *********
2026-03-24 02:43:48.369865 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:48.369869 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:48.369874 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:48.369890 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:48.369895 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:48.369899 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:48.369904 | orchestrator |
2026-03-24 02:43:48.369908 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 02:43:48.369917 | orchestrator | Tuesday 24 March 2026 02:43:40 +0000 (0:00:00.686) 0:00:38.251 *********
2026-03-24 02:43:48.369921 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:48.369926 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:48.369930 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:48.369935 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:48.369939 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:48.369944 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:48.369948 | orchestrator |
2026-03-24 02:43:48.369954 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 02:43:48.369959 | orchestrator | Tuesday 24 March 2026 02:43:41 +0000 (0:00:00.691) 0:00:38.943 *********
2026-03-24 02:43:48.369964 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:48.369969 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:48.369974 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:48.369979 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:43:48.370052 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:43:48.370058 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:43:48.370063 | orchestrator |
2026-03-24 02:43:48.370068 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 02:43:48.370074 | orchestrator | Tuesday 24 March 2026 02:43:42 +0000 (0:00:01.092) 0:00:40.035 *********
2026-03-24 02:43:48.370085 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:48.370090 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:48.370095 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:48.370100 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:48.370104 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:48.370109 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:48.370113 | orchestrator |
2026-03-24 02:43:48.370118 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 02:43:48.370122 | orchestrator | Tuesday 24 March 2026 02:43:42 +0000 (0:00:00.500) 0:00:40.535 *********
2026-03-24 02:43:48.370127 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:48.370131 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:48.370136 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:48.370140 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:48.370144 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:48.370149 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:48.370153 | orchestrator |
2026-03-24 02:43:48.370158 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 02:43:48.370162 | orchestrator | Tuesday 24 March 2026 02:43:43 +0000 (0:00:00.623) 0:00:41.158 *********
2026-03-24 02:43:48.370167 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:48.370171 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:48.370176 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:48.370180 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:43:48.370185 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:43:48.370189 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:43:48.370194 | orchestrator |
2026-03-24 02:43:48.370198 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 02:43:48.370203 | orchestrator | Tuesday 24 March 2026 02:43:44 +0000 (0:00:00.925) 0:00:42.084 *********
2026-03-24 02:43:48.370207 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:48.370212 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:48.370216 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:48.370221 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:43:48.370225 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:43:48.370229 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:43:48.370234 | orchestrator |
2026-03-24 02:43:48.370238 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 02:43:48.370243 | orchestrator | Tuesday 24 March 2026 02:43:45 +0000 (0:00:01.157) 0:00:43.242 *********
2026-03-24 02:43:48.370247 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:48.370252 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:48.370256 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:48.370261 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:48.370265 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:48.370270 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:48.370274 | orchestrator |
2026-03-24 02:43:48.370279 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 02:43:48.370283 | orchestrator | Tuesday 24 March 2026 02:43:46 +0000 (0:00:00.502) 0:00:43.744 *********
2026-03-24 02:43:48.370288 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:43:48.370292 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:43:48.370297 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:43:48.370301 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:43:48.370305 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:43:48.370310 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:43:48.370315 | orchestrator |
2026-03-24 02:43:48.370319 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 02:43:48.370324 | orchestrator | Tuesday 24 March 2026 02:43:46 +0000 (0:00:00.663) 0:00:44.408 *********
2026-03-24 02:43:48.370328 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:48.370333 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:48.370337 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:48.370342 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:48.370350 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:48.370354 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:48.370359 | orchestrator |
2026-03-24 02:43:48.370363 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 02:43:48.370368 | orchestrator | Tuesday 24 March 2026 02:43:47 +0000 (0:00:00.517) 0:00:44.925 *********
2026-03-24 02:43:48.370372 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:48.370377 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:48.370381 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:43:48.370385 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:43:48.370390 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:43:48.370395 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:43:48.370399 | orchestrator |
2026-03-24 02:43:48.370403 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 02:43:48.370408 | orchestrator | Tuesday 24 March 2026 02:43:48 +0000 (0:00:00.530) 0:00:45.577 *********
2026-03-24 02:43:48.370412 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:43:48.370417 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:43:48.370426 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:44:55.381145 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:44:55.381239 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:44:55.381250 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:44:55.381258 | orchestrator |
2026-03-24 02:44:55.381265 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 02:44:55.381285 | orchestrator | Tuesday 24 March 2026 02:43:48 +0000 (0:00:00.530) 0:00:46.107 *********
2026-03-24 02:44:55.381304 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:44:55.381318 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:44:55.381324 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:44:55.381331 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:44:55.381337 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:44:55.381343 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:44:55.381349 | orchestrator |
2026-03-24 02:44:55.381356 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 02:44:55.381362 | orchestrator | Tuesday 24 March 2026 02:43:49 +0000 (0:00:00.637) 0:00:46.745 *********
2026-03-24 02:44:55.381369 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:44:55.381375 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:44:55.381382 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:44:55.381388 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:44:55.381394 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:44:55.381400 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:44:55.381411 | orchestrator |
2026-03-24 02:44:55.381421 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 02:44:55.381432 | orchestrator | Tuesday 24 March 2026 02:43:49 +0000 (0:00:00.513) 0:00:47.258 *********
2026-03-24 02:44:55.381442 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:44:55.381452 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:44:55.381462 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:44:55.381473 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:44:55.381484 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:44:55.381494 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:44:55.381504 | orchestrator |
2026-03-24 02:44:55.381515 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 02:44:55.381526 | orchestrator | Tuesday 24 March 2026 02:43:50 +0000 (0:00:00.663) 0:00:47.921 *********
2026-03-24 02:44:55.381535 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:44:55.381545 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:44:55.381555 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:44:55.381566 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:44:55.381576 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:44:55.381588 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:44:55.381594 | orchestrator |
2026-03-24 02:44:55.381601 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 02:44:55.381625 | orchestrator | Tuesday 24 March 2026 02:43:50 +0000 (0:00:00.540) 0:00:48.462 *********
2026-03-24 02:44:55.381632 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:44:55.381638 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:44:55.381644 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:44:55.381650 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:44:55.381713 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:44:55.381724 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:44:55.381731 | orchestrator |
2026-03-24 02:44:55.381739 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 02:44:55.381746 | orchestrator | Tuesday 24 March 2026 02:43:51 +0000 (0:00:01.073) 0:00:49.536 *********
2026-03-24 02:44:55.381753 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:44:55.381760 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:44:55.381767 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:44:55.381774 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:44:55.381782 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:44:55.381788 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:44:55.381795 | orchestrator |
2026-03-24 02:44:55.381802 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 02:44:55.381809 | orchestrator | Tuesday 24 March 2026 02:43:53 +0000 (0:00:02.091) 0:00:51.053 *********
2026-03-24 02:44:55.381816 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:44:55.381823 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:44:55.381830 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:44:55.381837 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:44:55.381844 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:44:55.381850 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:44:55.381857 | orchestrator |
2026-03-24 02:44:55.381864 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 02:44:55.381871 | orchestrator | Tuesday 24 March 2026 02:43:55 +0000 (0:00:02.091) 0:00:53.144 *********
2026-03-24 02:44:55.381879 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:44:55.381888 | orchestrator |
2026-03-24 02:44:55.381895 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 02:44:55.381902 | orchestrator | Tuesday 24 March 2026 02:43:56 +0000 (0:00:01.130) 0:00:54.274 *********
2026-03-24 02:44:55.381909 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:44:55.381916 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:44:55.381923 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:44:55.381930 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:44:55.381936 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:44:55.381943 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:44:55.381950 | orchestrator |
2026-03-24 02:44:55.381957 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 02:44:55.381964 | orchestrator | Tuesday 24 March 2026 02:43:57 +0000 (0:00:00.515) 0:00:54.790 *********
2026-03-24 02:44:55.381971 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:44:55.381977 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:44:55.381984 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:44:55.381991 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:44:55.381998 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:44:55.382006 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:44:55.382013 | orchestrator |
2026-03-24 02:44:55.382064 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 02:44:55.382072 | orchestrator | Tuesday 24 March 2026 02:43:57 +0000 (0:00:00.615) 0:00:55.405 *********
2026-03-24 02:44:55.382096 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 02:44:55.382103 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 02:44:55.382109 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 02:44:55.382128 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 02:44:55.382135 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 02:44:55.382141 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 02:44:55.382148 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 02:44:55.382154 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 02:44:55.382160 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 02:44:55.382166 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 02:44:55.382172 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 02:44:55.382178 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 02:44:55.382184 | orchestrator |
2026-03-24 02:44:55.382191 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 02:44:55.382197 | orchestrator | Tuesday 24 March 2026 02:43:59 +0000 (0:00:01.285) 0:00:56.690 *********
2026-03-24 02:44:55.382203 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:44:55.382209 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:44:55.382215 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:44:55.382221 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:44:55.382227 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:44:55.382233 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:44:55.382239 | orchestrator |
2026-03-24 02:44:55.382245 | orchestrator | TASK [ceph-container-common : Restore certificates
selinux context] ************ 2026-03-24 02:44:55.382251 | orchestrator | Tuesday 24 March 2026 02:44:00 +0000 (0:00:01.006) 0:00:57.697 ********* 2026-03-24 02:44:55.382257 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:44:55.382263 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:44:55.382269 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:44:55.382275 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:44:55.382281 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:44:55.382287 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:44:55.382293 | orchestrator | 2026-03-24 02:44:55.382299 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-24 02:44:55.382305 | orchestrator | Tuesday 24 March 2026 02:44:00 +0000 (0:00:00.536) 0:00:58.233 ********* 2026-03-24 02:44:55.382312 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:44:55.382318 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:44:55.382324 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:44:55.382330 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:44:55.382336 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:44:55.382342 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:44:55.382348 | orchestrator | 2026-03-24 02:44:55.382354 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-24 02:44:55.382360 | orchestrator | Tuesday 24 March 2026 02:44:01 +0000 (0:00:00.626) 0:00:58.859 ********* 2026-03-24 02:44:55.382366 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:44:55.382372 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:44:55.382378 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:44:55.382384 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:44:55.382390 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:44:55.382396 | orchestrator | skipping: [testbed-node-2] 
2026-03-24 02:44:55.382402 | orchestrator | 2026-03-24 02:44:55.382408 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-24 02:44:55.382415 | orchestrator | Tuesday 24 March 2026 02:44:01 +0000 (0:00:00.513) 0:00:59.373 ********* 2026-03-24 02:44:55.382421 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:44:55.382432 | orchestrator | 2026-03-24 02:44:55.382438 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-24 02:44:55.382444 | orchestrator | Tuesday 24 March 2026 02:44:02 +0000 (0:00:01.018) 0:01:00.392 ********* 2026-03-24 02:44:55.382450 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:44:55.382456 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:44:55.382463 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:44:55.382469 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:44:55.382475 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:44:55.382481 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:44:55.382487 | orchestrator | 2026-03-24 02:44:55.382493 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-24 02:44:55.382499 | orchestrator | Tuesday 24 March 2026 02:44:55 +0000 (0:00:52.232) 0:01:52.624 ********* 2026-03-24 02:44:55.382505 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 02:44:55.382511 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 02:44:55.382517 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 02:44:55.382523 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:44:55.382530 | orchestrator | skipping: [testbed-node-4] => 
(item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 02:44:55.382536 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 02:44:55.382545 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 02:44:55.382556 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:44:55.382567 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 02:44:55.382584 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 02:45:17.589527 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 02:45:17.589663 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.589780 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 02:45:17.589804 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 02:45:17.589819 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 02:45:17.589833 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.589847 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 02:45:17.589861 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 02:45:17.589879 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 02:45:17.589900 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.589914 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 02:45:17.589928 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 02:45:17.589942 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 02:45:17.589956 | orchestrator | skipping: 
[testbed-node-2] 2026-03-24 02:45:17.589971 | orchestrator | 2026-03-24 02:45:17.589992 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-24 02:45:17.590011 | orchestrator | Tuesday 24 March 2026 02:44:55 +0000 (0:00:00.649) 0:01:53.273 ********* 2026-03-24 02:45:17.590104 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.590123 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.590137 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.590152 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.590166 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.590181 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.590194 | orchestrator | 2026-03-24 02:45:17.590307 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-24 02:45:17.590360 | orchestrator | Tuesday 24 March 2026 02:44:56 +0000 (0:00:00.738) 0:01:54.011 ********* 2026-03-24 02:45:17.590377 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.590392 | orchestrator | 2026-03-24 02:45:17.590407 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-24 02:45:17.590416 | orchestrator | Tuesday 24 March 2026 02:44:56 +0000 (0:00:00.142) 0:01:54.154 ********* 2026-03-24 02:45:17.590425 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.590433 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.590442 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.590451 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.590459 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.590468 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.590476 | orchestrator | 2026-03-24 02:45:17.590485 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-24 02:45:17.590493 | 
orchestrator | Tuesday 24 March 2026 02:44:57 +0000 (0:00:00.571) 0:01:54.725 ********* 2026-03-24 02:45:17.590502 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.590510 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.590519 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.590527 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.590536 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.590544 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.590552 | orchestrator | 2026-03-24 02:45:17.590561 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-24 02:45:17.590569 | orchestrator | Tuesday 24 March 2026 02:44:57 +0000 (0:00:00.774) 0:01:55.499 ********* 2026-03-24 02:45:17.590578 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.590586 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.590595 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.590603 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.590612 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.590621 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.590629 | orchestrator | 2026-03-24 02:45:17.590638 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-24 02:45:17.590646 | orchestrator | Tuesday 24 March 2026 02:44:58 +0000 (0:00:00.600) 0:01:56.099 ********* 2026-03-24 02:45:17.590655 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:45:17.590665 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:17.590702 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:17.590717 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:17.590726 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:45:17.590735 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:45:17.590743 | orchestrator | 2026-03-24 02:45:17.590752 | orchestrator | TASK 
[ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-24 02:45:17.590762 | orchestrator | Tuesday 24 March 2026 02:45:02 +0000 (0:00:03.691) 0:01:59.791 ********* 2026-03-24 02:45:17.590770 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:17.590779 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:17.590787 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:17.590796 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:45:17.590804 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:45:17.590813 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:45:17.590821 | orchestrator | 2026-03-24 02:45:17.590830 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-24 02:45:17.590839 | orchestrator | Tuesday 24 March 2026 02:45:02 +0000 (0:00:00.558) 0:02:00.349 ********* 2026-03-24 02:45:17.590849 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:45:17.590859 | orchestrator | 2026-03-24 02:45:17.590868 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-24 02:45:17.590877 | orchestrator | Tuesday 24 March 2026 02:45:03 +0000 (0:00:01.150) 0:02:01.500 ********* 2026-03-24 02:45:17.590894 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.590903 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.590911 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.590943 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.590953 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.590968 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.590983 | orchestrator | 2026-03-24 02:45:17.591007 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-24 02:45:17.591021 | 
orchestrator | Tuesday 24 March 2026 02:45:04 +0000 (0:00:00.750) 0:02:02.250 ********* 2026-03-24 02:45:17.591035 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.591049 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.591061 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.591075 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.591089 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.591103 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.591116 | orchestrator | 2026-03-24 02:45:17.591131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-24 02:45:17.591147 | orchestrator | Tuesday 24 March 2026 02:45:05 +0000 (0:00:00.566) 0:02:02.817 ********* 2026-03-24 02:45:17.591161 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.591177 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.591187 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.591196 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.591204 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.591213 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.591221 | orchestrator | 2026-03-24 02:45:17.591230 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-24 02:45:17.591239 | orchestrator | Tuesday 24 March 2026 02:45:06 +0000 (0:00:00.779) 0:02:03.597 ********* 2026-03-24 02:45:17.591247 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.591256 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.591264 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.591273 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.591281 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.591290 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.591298 | orchestrator | 2026-03-24 
02:45:17.591307 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-24 02:45:17.591316 | orchestrator | Tuesday 24 March 2026 02:45:06 +0000 (0:00:00.561) 0:02:04.158 ********* 2026-03-24 02:45:17.591324 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.591333 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.591341 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.591350 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.591358 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.591366 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.591375 | orchestrator | 2026-03-24 02:45:17.591384 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-24 02:45:17.591392 | orchestrator | Tuesday 24 March 2026 02:45:07 +0000 (0:00:00.662) 0:02:04.821 ********* 2026-03-24 02:45:17.591401 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.591409 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.591418 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.591427 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.591435 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.591443 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.591452 | orchestrator | 2026-03-24 02:45:17.591461 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-24 02:45:17.591469 | orchestrator | Tuesday 24 March 2026 02:45:07 +0000 (0:00:00.496) 0:02:05.318 ********* 2026-03-24 02:45:17.591478 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.591486 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.591503 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.591511 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.591520 | orchestrator | skipping: 
[testbed-node-1] 2026-03-24 02:45:17.591529 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.591537 | orchestrator | 2026-03-24 02:45:17.591546 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-24 02:45:17.591555 | orchestrator | Tuesday 24 March 2026 02:45:08 +0000 (0:00:00.660) 0:02:05.979 ********* 2026-03-24 02:45:17.591563 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:17.591572 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:17.591581 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:17.591589 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:17.591598 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:17.591606 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:17.591615 | orchestrator | 2026-03-24 02:45:17.591624 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-24 02:45:17.591632 | orchestrator | Tuesday 24 March 2026 02:45:08 +0000 (0:00:00.536) 0:02:06.516 ********* 2026-03-24 02:45:17.591641 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:17.591650 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:17.591658 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:17.591667 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:45:17.591700 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:45:17.591709 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:45:17.591717 | orchestrator | 2026-03-24 02:45:17.591726 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-24 02:45:17.591735 | orchestrator | Tuesday 24 March 2026 02:45:10 +0000 (0:00:01.054) 0:02:07.571 ********* 2026-03-24 02:45:17.591745 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 
02:45:17.591755 | orchestrator | 2026-03-24 02:45:17.591764 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-24 02:45:17.591773 | orchestrator | Tuesday 24 March 2026 02:45:11 +0000 (0:00:01.060) 0:02:08.631 ********* 2026-03-24 02:45:17.591782 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-24 02:45:17.591791 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-24 02:45:17.591800 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-24 02:45:17.591808 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-24 02:45:17.591817 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-24 02:45:17.591826 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-24 02:45:17.591834 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-24 02:45:17.591852 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-24 02:45:20.929591 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-24 02:45:20.929749 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-24 02:45:20.929760 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-24 02:45:20.929765 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-24 02:45:20.929769 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-24 02:45:20.929773 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-24 02:45:20.929777 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-24 02:45:20.929781 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-24 02:45:20.929785 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-24 02:45:20.929789 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-24 02:45:20.929793 
| orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-24 02:45:20.929797 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-24 02:45:20.929800 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-24 02:45:20.929820 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-24 02:45:20.929824 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-24 02:45:20.929828 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-24 02:45:20.929832 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-24 02:45:20.929836 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-24 02:45:20.929840 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-24 02:45:20.929843 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-24 02:45:20.929847 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-24 02:45:20.929852 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-24 02:45:20.929858 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-24 02:45:20.929865 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-24 02:45:20.929870 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-24 02:45:20.929877 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-24 02:45:20.929882 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-24 02:45:20.929887 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-24 02:45:20.929892 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-24 02:45:20.929897 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-24 02:45:20.929907 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-24 02:45:20.929914 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-24 02:45:20.929919 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-24 02:45:20.929925 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-24 02:45:20.929931 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-24 02:45:20.929938 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-24 02:45:20.929943 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 02:45:20.929950 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-24 02:45:20.929956 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-24 02:45:20.929962 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-24 02:45:20.929968 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-24 02:45:20.929974 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 02:45:20.929980 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 02:45:20.929984 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 02:45:20.929987 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 02:45:20.929991 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 02:45:20.929995 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 02:45:20.929999 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 02:45:20.930003 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 02:45:20.930007 | orchestrator | changed: [testbed-node-4] 
=> (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 02:45:20.930011 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 02:45:20.930049 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 02:45:20.930054 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 02:45:20.930058 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 02:45:20.930067 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 02:45:20.930071 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 02:45:20.930075 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 02:45:20.930079 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 02:45:20.930083 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 02:45:20.930098 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 02:45:20.930106 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 02:45:20.930110 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 02:45:20.930114 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 02:45:20.930118 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 02:45:20.930121 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 02:45:20.930125 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 02:45:20.930129 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 02:45:20.930133 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 
02:45:20.930137 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 02:45:20.930140 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 02:45:20.930144 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 02:45:20.930148 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 02:45:20.930152 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-24 02:45:20.930156 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 02:45:20.930160 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 02:45:20.930163 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 02:45:20.930167 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 02:45:20.930171 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-24 02:45:20.930175 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-24 02:45:20.930178 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-24 02:45:20.930182 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-24 02:45:20.930186 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-24 02:45:20.930190 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-24 02:45:20.930193 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-24 02:45:20.930197 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-24 02:45:20.930201 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-24 02:45:20.930205 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-24 02:45:20.930208 | orchestrator | changed: [testbed-node-2] => 
(item=/var/log/ceph) 2026-03-24 02:45:20.930212 | orchestrator | 2026-03-24 02:45:20.930217 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-24 02:45:20.930221 | orchestrator | Tuesday 24 March 2026 02:45:17 +0000 (0:00:06.506) 0:02:15.138 ********* 2026-03-24 02:45:20.930225 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:20.930229 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:20.930233 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:20.930237 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:45:20.930241 | orchestrator | 2026-03-24 02:45:20.930245 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-24 02:45:20.930252 | orchestrator | Tuesday 24 March 2026 02:45:18 +0000 (0:00:00.828) 0:02:15.966 ********* 2026-03-24 02:45:20.930256 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 02:45:20.930261 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-24 02:45:20.930264 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 02:45:20.930268 | orchestrator | 2026-03-24 02:45:20.930272 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-24 02:45:20.930276 | orchestrator | Tuesday 24 March 2026 02:45:19 +0000 (0:00:00.634) 0:02:16.601 ********* 2026-03-24 02:45:20.930280 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 02:45:20.930284 | orchestrator | 
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-24 02:45:20.930288 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 02:45:20.930291 | orchestrator | 2026-03-24 02:45:20.930295 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-24 02:45:20.930299 | orchestrator | Tuesday 24 March 2026 02:45:20 +0000 (0:00:01.250) 0:02:17.851 ********* 2026-03-24 02:45:20.930303 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:20.930306 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:20.930310 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:20.930314 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:20.930318 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:20.930321 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:20.930325 | orchestrator | 2026-03-24 02:45:20.930329 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-24 02:45:20.930335 | orchestrator | Tuesday 24 March 2026 02:45:20 +0000 (0:00:00.629) 0:02:18.480 ********* 2026-03-24 02:45:33.498601 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:33.498899 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:33.498927 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:33.498945 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.498964 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.498981 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.498998 | orchestrator | 2026-03-24 02:45:33.499017 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-24 02:45:33.499034 | orchestrator | Tuesday 24 March 2026 02:45:21 +0000 (0:00:00.507) 0:02:18.988 ********* 2026-03-24 
02:45:33.499050 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.499067 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.499085 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.499102 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.499119 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.499136 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.499154 | orchestrator | 2026-03-24 02:45:33.499172 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-24 02:45:33.499188 | orchestrator | Tuesday 24 March 2026 02:45:22 +0000 (0:00:00.629) 0:02:19.618 ********* 2026-03-24 02:45:33.499205 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.499222 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.499237 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.499253 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.499270 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.499288 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.499304 | orchestrator | 2026-03-24 02:45:33.499321 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-24 02:45:33.499365 | orchestrator | Tuesday 24 March 2026 02:45:22 +0000 (0:00:00.491) 0:02:20.109 ********* 2026-03-24 02:45:33.499384 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.499400 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.499417 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.499433 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.499450 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.499467 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.499485 | orchestrator | 2026-03-24 02:45:33.499503 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many 
osds are to be created] *** 2026-03-24 02:45:33.499521 | orchestrator | Tuesday 24 March 2026 02:45:23 +0000 (0:00:00.697) 0:02:20.807 ********* 2026-03-24 02:45:33.499537 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.499555 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.499572 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.499588 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.499604 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.499614 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.499623 | orchestrator | 2026-03-24 02:45:33.499633 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-24 02:45:33.499643 | orchestrator | Tuesday 24 March 2026 02:45:23 +0000 (0:00:00.589) 0:02:21.396 ********* 2026-03-24 02:45:33.499653 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.499662 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.499671 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.499703 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.499714 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.499723 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.499733 | orchestrator | 2026-03-24 02:45:33.499742 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-24 02:45:33.499753 | orchestrator | Tuesday 24 March 2026 02:45:24 +0000 (0:00:00.872) 0:02:22.269 ********* 2026-03-24 02:45:33.499762 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.499772 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.499781 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.499791 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.499800 | orchestrator | skipping: [testbed-node-1] 2026-03-24 
02:45:33.499810 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.499819 | orchestrator | 2026-03-24 02:45:33.499829 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 02:45:33.499838 | orchestrator | Tuesday 24 March 2026 02:45:25 +0000 (0:00:00.561) 0:02:22.831 ********* 2026-03-24 02:45:33.499851 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.499867 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.499882 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.499898 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:33.499913 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:33.499929 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:33.499947 | orchestrator | 2026-03-24 02:45:33.499964 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 02:45:33.499979 | orchestrator | Tuesday 24 March 2026 02:45:28 +0000 (0:00:03.042) 0:02:25.874 ********* 2026-03-24 02:45:33.499996 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:33.500007 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:33.500017 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:33.500026 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.500036 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.500045 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.500055 | orchestrator | 2026-03-24 02:45:33.500064 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 02:45:33.500074 | orchestrator | Tuesday 24 March 2026 02:45:28 +0000 (0:00:00.619) 0:02:26.493 ********* 2026-03-24 02:45:33.500095 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:33.500104 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:33.500114 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:33.500128 | orchestrator | 
skipping: [testbed-node-0] 2026-03-24 02:45:33.500144 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.500161 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.500177 | orchestrator | 2026-03-24 02:45:33.500194 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 02:45:33.500209 | orchestrator | Tuesday 24 March 2026 02:45:29 +0000 (0:00:00.811) 0:02:27.305 ********* 2026-03-24 02:45:33.500225 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.500235 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.500250 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.500267 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.500310 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.500332 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.500342 | orchestrator | 2026-03-24 02:45:33.500352 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 02:45:33.500362 | orchestrator | Tuesday 24 March 2026 02:45:30 +0000 (0:00:00.575) 0:02:27.881 ********* 2026-03-24 02:45:33.500372 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 02:45:33.500384 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-24 02:45:33.500395 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 02:45:33.500412 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.500428 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.500446 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.500461 | orchestrator | 2026-03-24 02:45:33.500477 | orchestrator | TASK [ceph-config : Set 
config to cluster] ************************************* 2026-03-24 02:45:33.500487 | orchestrator | Tuesday 24 March 2026 02:45:31 +0000 (0:00:00.780) 0:02:28.661 ********* 2026-03-24 02:45:33.500500 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-24 02:45:33.500514 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-24 02:45:33.500525 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.500535 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-24 02:45:33.500545 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-24 02:45:33.500555 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.500565 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 
'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-24 02:45:33.500584 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-03-24 02:45:33.500594 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.500603 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.500613 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.500622 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.500632 | orchestrator | 2026-03-24 02:45:33.500642 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 02:45:33.500652 | orchestrator | Tuesday 24 March 2026 02:45:31 +0000 (0:00:00.603) 0:02:29.265 ********* 2026-03-24 02:45:33.500661 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.500671 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.500745 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.500755 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:33.500765 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.500774 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.500784 | orchestrator | 2026-03-24 02:45:33.500794 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 02:45:33.500803 | orchestrator | Tuesday 24 March 2026 02:45:32 +0000 (0:00:00.757) 0:02:30.023 ********* 2026-03-24 02:45:33.500813 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.500823 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:33.500840 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:33.500858 | orchestrator | skipping: [testbed-node-0] 2026-03-24 
02:45:33.500876 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:33.500893 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:33.500908 | orchestrator | 2026-03-24 02:45:33.500918 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 02:45:33.500928 | orchestrator | Tuesday 24 March 2026 02:45:32 +0000 (0:00:00.528) 0:02:30.552 ********* 2026-03-24 02:45:33.500942 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:33.500968 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:49.922226 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:49.922302 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:49.922309 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:49.922313 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:49.922317 | orchestrator | 2026-03-24 02:45:49.922323 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 02:45:49.922328 | orchestrator | Tuesday 24 March 2026 02:45:33 +0000 (0:00:00.822) 0:02:31.374 ********* 2026-03-24 02:45:49.922333 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.922337 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:49.922341 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:49.922344 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:49.922348 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:49.922352 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:49.922356 | orchestrator | 2026-03-24 02:45:49.922360 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 02:45:49.922364 | orchestrator | Tuesday 24 March 2026 02:45:34 +0000 (0:00:00.724) 0:02:32.098 ********* 2026-03-24 02:45:49.922368 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.922372 | 
orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:49.922376 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:49.922380 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:49.922384 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:49.922387 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:49.922391 | orchestrator | 2026-03-24 02:45:49.922395 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 02:45:49.922414 | orchestrator | Tuesday 24 March 2026 02:45:35 +0000 (0:00:00.597) 0:02:32.696 ********* 2026-03-24 02:45:49.922419 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:49.922423 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:49.922427 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:49.922431 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:49.922435 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:49.922439 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:49.922443 | orchestrator | 2026-03-24 02:45:49.922447 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 02:45:49.922451 | orchestrator | Tuesday 24 March 2026 02:45:35 +0000 (0:00:00.749) 0:02:33.445 ********* 2026-03-24 02:45:49.922454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:45:49.922459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 02:45:49.922463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 02:45:49.922466 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.922471 | orchestrator | 2026-03-24 02:45:49.922475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 02:45:49.922479 | orchestrator | Tuesday 24 March 2026 02:45:36 +0000 (0:00:00.403) 0:02:33.848 ********* 2026-03-24 02:45:49.922483 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:45:49.922486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 02:45:49.922490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 02:45:49.922494 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.922498 | orchestrator | 2026-03-24 02:45:49.922502 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 02:45:49.922506 | orchestrator | Tuesday 24 March 2026 02:45:36 +0000 (0:00:00.401) 0:02:34.249 ********* 2026-03-24 02:45:49.922510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:45:49.922514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 02:45:49.922526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 02:45:49.922530 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.922534 | orchestrator | 2026-03-24 02:45:49.922538 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 02:45:49.922542 | orchestrator | Tuesday 24 March 2026 02:45:37 +0000 (0:00:00.394) 0:02:34.644 ********* 2026-03-24 02:45:49.922545 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:45:49.922549 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:45:49.922553 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:45:49.922557 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:49.922561 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:49.922565 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:49.922569 | orchestrator | 2026-03-24 02:45:49.922573 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 02:45:49.922576 | orchestrator | Tuesday 24 March 2026 02:45:37 +0000 (0:00:00.577) 0:02:35.222 ********* 2026-03-24 02:45:49.922581 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-03-24 02:45:49.922585 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-24 02:45:49.922588 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-24 02:45:49.922592 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-24 02:45:49.922596 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:49.922600 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-24 02:45:49.922604 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:49.922608 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-24 02:45:49.922611 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:49.922615 | orchestrator | 2026-03-24 02:45:49.922619 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 02:45:49.922623 | orchestrator | Tuesday 24 March 2026 02:45:39 +0000 (0:00:01.664) 0:02:36.886 ********* 2026-03-24 02:45:49.922627 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:45:49.922636 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:45:49.922640 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:45:49.922644 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:45:49.922647 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:45:49.922651 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:45:49.922655 | orchestrator | 2026-03-24 02:45:49.922659 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 02:45:49.922663 | orchestrator | Tuesday 24 March 2026 02:45:41 +0000 (0:00:02.490) 0:02:39.377 ********* 2026-03-24 02:45:49.922668 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:45:49.922674 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:45:49.922681 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:45:49.922741 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:45:49.922757 | orchestrator | changed: [testbed-node-1] 
2026-03-24 02:45:49.922763 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:45:49.922770 | orchestrator | 2026-03-24 02:45:49.922776 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-24 02:45:49.922782 | orchestrator | Tuesday 24 March 2026 02:45:42 +0000 (0:00:00.964) 0:02:40.342 ********* 2026-03-24 02:45:49.922788 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.922795 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:49.922801 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:49.922809 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:45:49.922815 | orchestrator | 2026-03-24 02:45:49.922822 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-24 02:45:49.922829 | orchestrator | Tuesday 24 March 2026 02:45:43 +0000 (0:00:01.013) 0:02:41.355 ********* 2026-03-24 02:45:49.922835 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:45:49.922842 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:45:49.922849 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:45:49.922855 | orchestrator | 2026-03-24 02:45:49.922862 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-24 02:45:49.922869 | orchestrator | Tuesday 24 March 2026 02:45:44 +0000 (0:00:00.329) 0:02:41.685 ********* 2026-03-24 02:45:49.922876 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:45:49.922883 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:45:49.922890 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:45:49.922896 | orchestrator | 2026-03-24 02:45:49.922904 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-24 02:45:49.922913 | orchestrator | Tuesday 24 March 2026 02:45:45 +0000 (0:00:01.388) 0:02:43.073 ********* 2026-03-24 
02:45:49.922919 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 02:45:49.922926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 02:45:49.922932 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 02:45:49.922939 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:49.922945 | orchestrator | 2026-03-24 02:45:49.922951 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-24 02:45:49.922958 | orchestrator | Tuesday 24 March 2026 02:45:46 +0000 (0:00:00.605) 0:02:43.679 ********* 2026-03-24 02:45:49.922965 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:45:49.922972 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:45:49.922979 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:45:49.922986 | orchestrator | 2026-03-24 02:45:49.922992 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-24 02:45:49.922999 | orchestrator | Tuesday 24 March 2026 02:45:46 +0000 (0:00:00.319) 0:02:43.999 ********* 2026-03-24 02:45:49.923005 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:45:49.923012 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:45:49.923019 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:45:49.923026 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:45:49.923041 | orchestrator | 2026-03-24 02:45:49.923048 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-24 02:45:49.923055 | orchestrator | Tuesday 24 March 2026 02:45:47 +0000 (0:00:00.976) 0:02:44.976 ********* 2026-03-24 02:45:49.923062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:45:49.923069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 02:45:49.923076 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 02:45:49.923082 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.923089 | orchestrator | 2026-03-24 02:45:49.923096 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-24 02:45:49.923102 | orchestrator | Tuesday 24 March 2026 02:45:47 +0000 (0:00:00.410) 0:02:45.387 ********* 2026-03-24 02:45:49.923110 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.923118 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:49.923125 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:49.923132 | orchestrator | 2026-03-24 02:45:49.923140 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-24 02:45:49.923146 | orchestrator | Tuesday 24 March 2026 02:45:48 +0000 (0:00:00.310) 0:02:45.698 ********* 2026-03-24 02:45:49.923153 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.923160 | orchestrator | 2026-03-24 02:45:49.923166 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-24 02:45:49.923173 | orchestrator | Tuesday 24 March 2026 02:45:48 +0000 (0:00:00.228) 0:02:45.926 ********* 2026-03-24 02:45:49.923180 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.923187 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:45:49.923193 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:45:49.923200 | orchestrator | 2026-03-24 02:45:49.923208 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-24 02:45:49.923215 | orchestrator | Tuesday 24 March 2026 02:45:48 +0000 (0:00:00.307) 0:02:46.233 ********* 2026-03-24 02:45:49.923222 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.923229 | orchestrator | 2026-03-24 02:45:49.923236 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] 
******************** 2026-03-24 02:45:49.923243 | orchestrator | Tuesday 24 March 2026 02:45:49 +0000 (0:00:00.652) 0:02:46.886 ********* 2026-03-24 02:45:49.923250 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.923257 | orchestrator | 2026-03-24 02:45:49.923264 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-24 02:45:49.923271 | orchestrator | Tuesday 24 March 2026 02:45:49 +0000 (0:00:00.227) 0:02:47.114 ********* 2026-03-24 02:45:49.923278 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.923286 | orchestrator | 2026-03-24 02:45:49.923292 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-24 02:45:49.923299 | orchestrator | Tuesday 24 March 2026 02:45:49 +0000 (0:00:00.133) 0:02:47.247 ********* 2026-03-24 02:45:49.923305 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:45:49.923312 | orchestrator | 2026-03-24 02:45:49.923327 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-24 02:46:05.604432 | orchestrator | Tuesday 24 March 2026 02:45:49 +0000 (0:00:00.221) 0:02:47.469 ********* 2026-03-24 02:46:05.604561 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:46:05.604584 | orchestrator | 2026-03-24 02:46:05.604601 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-24 02:46:05.604615 | orchestrator | Tuesday 24 March 2026 02:45:50 +0000 (0:00:00.228) 0:02:47.697 ********* 2026-03-24 02:46:05.604628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:46:05.604643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 02:46:05.604657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 02:46:05.604671 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:46:05.604684 | orchestrator | 2026-03-24 02:46:05.604734 
| orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-24 02:46:05.604777 | orchestrator | Tuesday 24 March 2026 02:45:50 +0000 (0:00:00.377) 0:02:48.075 ********* 2026-03-24 02:46:05.604793 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:46:05.604807 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:46:05.604820 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:46:05.604833 | orchestrator | 2026-03-24 02:46:05.604847 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-24 02:46:05.604861 | orchestrator | Tuesday 24 March 2026 02:45:50 +0000 (0:00:00.302) 0:02:48.377 ********* 2026-03-24 02:46:05.604875 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:46:05.604889 | orchestrator | 2026-03-24 02:46:05.604903 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-24 02:46:05.604918 | orchestrator | Tuesday 24 March 2026 02:45:51 +0000 (0:00:00.231) 0:02:48.609 ********* 2026-03-24 02:46:05.604932 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:46:05.604947 | orchestrator | 2026-03-24 02:46:05.604962 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-24 02:46:05.604976 | orchestrator | Tuesday 24 March 2026 02:45:51 +0000 (0:00:00.202) 0:02:48.812 ********* 2026-03-24 02:46:05.604990 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:05.605005 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:05.605019 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:05.605034 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:46:05.605049 | orchestrator | 2026-03-24 02:46:05.605063 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-24 02:46:05.605078 | 
orchestrator | Tuesday 24 March 2026 02:45:52 +0000 (0:00:00.981) 0:02:49.793 ********* 2026-03-24 02:46:05.605093 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:46:05.605109 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:46:05.605125 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:46:05.605141 | orchestrator | 2026-03-24 02:46:05.605156 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-24 02:46:05.605171 | orchestrator | Tuesday 24 March 2026 02:45:52 +0000 (0:00:00.304) 0:02:50.098 ********* 2026-03-24 02:46:05.605186 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:46:05.605200 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:46:05.605214 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:46:05.605228 | orchestrator | 2026-03-24 02:46:05.605242 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-24 02:46:05.605257 | orchestrator | Tuesday 24 March 2026 02:45:53 +0000 (0:00:01.352) 0:02:51.451 ********* 2026-03-24 02:46:05.605271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:46:05.605286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 02:46:05.605296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 02:46:05.605304 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:46:05.605313 | orchestrator | 2026-03-24 02:46:05.605322 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-24 02:46:05.605331 | orchestrator | Tuesday 24 March 2026 02:45:54 +0000 (0:00:00.569) 0:02:52.020 ********* 2026-03-24 02:46:05.605340 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:46:05.605348 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:46:05.605357 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:46:05.605366 | orchestrator | 2026-03-24 02:46:05.605375 | orchestrator | 
RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-24 02:46:05.605383 | orchestrator | Tuesday 24 March 2026 02:45:54 +0000 (0:00:00.274) 0:02:52.295 ********* 2026-03-24 02:46:05.605392 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:05.605401 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:05.605410 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:05.605418 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:46:05.605440 | orchestrator | 2026-03-24 02:46:05.605449 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-24 02:46:05.605458 | orchestrator | Tuesday 24 March 2026 02:45:55 +0000 (0:00:00.832) 0:02:53.128 ********* 2026-03-24 02:46:05.605467 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:46:05.605475 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:46:05.605484 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:46:05.605493 | orchestrator | 2026-03-24 02:46:05.605501 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-24 02:46:05.605510 | orchestrator | Tuesday 24 March 2026 02:45:55 +0000 (0:00:00.281) 0:02:53.409 ********* 2026-03-24 02:46:05.605519 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:46:05.605528 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:46:05.605537 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:46:05.605545 | orchestrator | 2026-03-24 02:46:05.605554 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-24 02:46:05.605563 | orchestrator | Tuesday 24 March 2026 02:45:57 +0000 (0:00:01.199) 0:02:54.609 ********* 2026-03-24 02:46:05.605572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:46:05.605581 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-4)  2026-03-24 02:46:05.605590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 02:46:05.605620 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:46:05.605630 | orchestrator | 2026-03-24 02:46:05.605649 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-24 02:46:05.605658 | orchestrator | Tuesday 24 March 2026 02:45:57 +0000 (0:00:00.740) 0:02:55.350 ********* 2026-03-24 02:46:05.605667 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:46:05.605676 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:46:05.605684 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:46:05.605693 | orchestrator | 2026-03-24 02:46:05.605764 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-24 02:46:05.605775 | orchestrator | Tuesday 24 March 2026 02:45:58 +0000 (0:00:00.442) 0:02:55.792 ********* 2026-03-24 02:46:05.605784 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:46:05.605792 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:46:05.605801 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:46:05.605809 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:05.605818 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:05.605826 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:05.605835 | orchestrator | 2026-03-24 02:46:05.605844 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-24 02:46:05.605852 | orchestrator | Tuesday 24 March 2026 02:45:58 +0000 (0:00:00.538) 0:02:56.331 ********* 2026-03-24 02:46:05.605861 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:46:05.605869 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:46:05.605884 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:46:05.605899 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:46:05.605914 | orchestrator | 2026-03-24 02:46:05.605927 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-24 02:46:05.605942 | orchestrator | Tuesday 24 March 2026 02:45:59 +0000 (0:00:00.858) 0:02:57.190 ********* 2026-03-24 02:46:05.605956 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:05.605970 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:05.605983 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:05.605997 | orchestrator | 2026-03-24 02:46:05.606012 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-24 02:46:05.606103 | orchestrator | Tuesday 24 March 2026 02:45:59 +0000 (0:00:00.280) 0:02:57.470 ********* 2026-03-24 02:46:05.606119 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:46:05.606134 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:46:05.606149 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:46:05.606176 | orchestrator | 2026-03-24 02:46:05.606190 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-24 02:46:05.606205 | orchestrator | Tuesday 24 March 2026 02:46:01 +0000 (0:00:01.201) 0:02:58.671 ********* 2026-03-24 02:46:05.606220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 02:46:05.606236 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 02:46:05.606251 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 02:46:05.606266 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:05.606281 | orchestrator | 2026-03-24 02:46:05.606297 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-24 02:46:05.606311 | orchestrator | Tuesday 24 March 2026 02:46:02 +0000 (0:00:01.007) 0:02:59.678 ********* 2026-03-24 02:46:05.606326 | orchestrator 
| ok: [testbed-node-0] 2026-03-24 02:46:05.606337 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:05.606346 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:05.606354 | orchestrator | 2026-03-24 02:46:05.606363 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-24 02:46:05.606371 | orchestrator | 2026-03-24 02:46:05.606380 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 02:46:05.606389 | orchestrator | Tuesday 24 March 2026 02:46:02 +0000 (0:00:00.546) 0:03:00.224 ********* 2026-03-24 02:46:05.606398 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:46:05.606408 | orchestrator | 2026-03-24 02:46:05.606417 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-24 02:46:05.606426 | orchestrator | Tuesday 24 March 2026 02:46:03 +0000 (0:00:00.658) 0:03:00.883 ********* 2026-03-24 02:46:05.606434 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:46:05.606443 | orchestrator | 2026-03-24 02:46:05.606452 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-24 02:46:05.606460 | orchestrator | Tuesday 24 March 2026 02:46:03 +0000 (0:00:00.503) 0:03:01.386 ********* 2026-03-24 02:46:05.606469 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:05.606478 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:05.606486 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:05.606495 | orchestrator | 2026-03-24 02:46:05.606503 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-24 02:46:05.606512 | orchestrator | Tuesday 24 March 2026 02:46:04 +0000 (0:00:00.692) 0:03:02.079 ********* 
2026-03-24 02:46:05.606520 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:05.606529 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:05.606537 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:05.606546 | orchestrator | 2026-03-24 02:46:05.606554 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-24 02:46:05.606563 | orchestrator | Tuesday 24 March 2026 02:46:04 +0000 (0:00:00.475) 0:03:02.555 ********* 2026-03-24 02:46:05.606571 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:05.606580 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:05.606588 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:05.606597 | orchestrator | 2026-03-24 02:46:05.606605 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-24 02:46:05.606614 | orchestrator | Tuesday 24 March 2026 02:46:05 +0000 (0:00:00.300) 0:03:02.855 ********* 2026-03-24 02:46:05.606622 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:05.606631 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:05.606639 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:05.606648 | orchestrator | 2026-03-24 02:46:05.606656 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-24 02:46:05.606692 | orchestrator | Tuesday 24 March 2026 02:46:05 +0000 (0:00:00.298) 0:03:03.153 ********* 2026-03-24 02:46:25.403305 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.403492 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.403513 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.403529 | orchestrator | 2026-03-24 02:46:25.403547 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-24 02:46:25.403564 | orchestrator | Tuesday 24 March 2026 02:46:06 +0000 (0:00:00.707) 0:03:03.860 ********* 2026-03-24 
02:46:25.403580 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:25.403597 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:25.403612 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:25.403626 | orchestrator | 2026-03-24 02:46:25.403642 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-24 02:46:25.403656 | orchestrator | Tuesday 24 March 2026 02:46:06 +0000 (0:00:00.495) 0:03:04.355 ********* 2026-03-24 02:46:25.403671 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:25.403686 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:25.403701 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:25.403741 | orchestrator | 2026-03-24 02:46:25.403757 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-24 02:46:25.403772 | orchestrator | Tuesday 24 March 2026 02:46:07 +0000 (0:00:00.320) 0:03:04.676 ********* 2026-03-24 02:46:25.403786 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.403801 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.403815 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.403830 | orchestrator | 2026-03-24 02:46:25.403846 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-24 02:46:25.403863 | orchestrator | Tuesday 24 March 2026 02:46:07 +0000 (0:00:00.716) 0:03:05.393 ********* 2026-03-24 02:46:25.403879 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.403894 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.403910 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.403925 | orchestrator | 2026-03-24 02:46:25.403941 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-24 02:46:25.403957 | orchestrator | Tuesday 24 March 2026 02:46:08 +0000 (0:00:00.740) 0:03:06.133 ********* 2026-03-24 02:46:25.403973 | orchestrator | 
skipping: [testbed-node-0] 2026-03-24 02:46:25.403989 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:25.404016 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:25.404032 | orchestrator | 2026-03-24 02:46:25.404049 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-24 02:46:25.404065 | orchestrator | Tuesday 24 March 2026 02:46:09 +0000 (0:00:00.495) 0:03:06.629 ********* 2026-03-24 02:46:25.404081 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.404096 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.404112 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.404128 | orchestrator | 2026-03-24 02:46:25.404144 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-24 02:46:25.404160 | orchestrator | Tuesday 24 March 2026 02:46:09 +0000 (0:00:00.344) 0:03:06.973 ********* 2026-03-24 02:46:25.404176 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:25.404192 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:25.404207 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:25.404221 | orchestrator | 2026-03-24 02:46:25.404236 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-24 02:46:25.404251 | orchestrator | Tuesday 24 March 2026 02:46:09 +0000 (0:00:00.301) 0:03:07.275 ********* 2026-03-24 02:46:25.404266 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:25.404281 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:25.404296 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:25.404311 | orchestrator | 2026-03-24 02:46:25.404326 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-24 02:46:25.404341 | orchestrator | Tuesday 24 March 2026 02:46:10 +0000 (0:00:00.288) 0:03:07.563 ********* 2026-03-24 02:46:25.404356 | orchestrator | skipping: 
[testbed-node-0] 2026-03-24 02:46:25.404371 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:25.404386 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:25.404412 | orchestrator | 2026-03-24 02:46:25.404427 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-24 02:46:25.404442 | orchestrator | Tuesday 24 March 2026 02:46:10 +0000 (0:00:00.500) 0:03:08.064 ********* 2026-03-24 02:46:25.404458 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:25.404472 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:25.404487 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:25.404502 | orchestrator | 2026-03-24 02:46:25.404517 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-24 02:46:25.404531 | orchestrator | Tuesday 24 March 2026 02:46:10 +0000 (0:00:00.308) 0:03:08.373 ********* 2026-03-24 02:46:25.404547 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:25.404561 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:46:25.404576 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:46:25.404591 | orchestrator | 2026-03-24 02:46:25.404605 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-24 02:46:25.404621 | orchestrator | Tuesday 24 March 2026 02:46:11 +0000 (0:00:00.299) 0:03:08.672 ********* 2026-03-24 02:46:25.404635 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.404650 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.404664 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.404679 | orchestrator | 2026-03-24 02:46:25.404693 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-24 02:46:25.404769 | orchestrator | Tuesday 24 March 2026 02:46:11 +0000 (0:00:00.323) 0:03:08.996 ********* 2026-03-24 02:46:25.404788 | orchestrator | ok: [testbed-node-0] 2026-03-24 
02:46:25.404803 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.404818 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.404832 | orchestrator | 2026-03-24 02:46:25.404847 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-24 02:46:25.404862 | orchestrator | Tuesday 24 March 2026 02:46:11 +0000 (0:00:00.524) 0:03:09.521 ********* 2026-03-24 02:46:25.404877 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.404891 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.404906 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.404921 | orchestrator | 2026-03-24 02:46:25.404936 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-24 02:46:25.404951 | orchestrator | Tuesday 24 March 2026 02:46:12 +0000 (0:00:00.527) 0:03:10.049 ********* 2026-03-24 02:46:25.404983 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.404998 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.405033 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.405049 | orchestrator | 2026-03-24 02:46:25.405064 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-24 02:46:25.405079 | orchestrator | Tuesday 24 March 2026 02:46:12 +0000 (0:00:00.311) 0:03:10.360 ********* 2026-03-24 02:46:25.405095 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:46:25.405109 | orchestrator | 2026-03-24 02:46:25.405124 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-24 02:46:25.405139 | orchestrator | Tuesday 24 March 2026 02:46:13 +0000 (0:00:00.786) 0:03:11.146 ********* 2026-03-24 02:46:25.405154 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:46:25.405168 | orchestrator | 2026-03-24 02:46:25.405182 | orchestrator | TASK [ceph-mon : Generate 
monitor initial keyring] ***************************** 2026-03-24 02:46:25.405197 | orchestrator | Tuesday 24 March 2026 02:46:13 +0000 (0:00:00.163) 0:03:11.309 ********* 2026-03-24 02:46:25.405212 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-24 02:46:25.405226 | orchestrator | 2026-03-24 02:46:25.405242 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-24 02:46:25.405256 | orchestrator | Tuesday 24 March 2026 02:46:14 +0000 (0:00:00.953) 0:03:12.263 ********* 2026-03-24 02:46:25.405270 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.405285 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.405300 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.405324 | orchestrator | 2026-03-24 02:46:25.405338 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-24 02:46:25.405353 | orchestrator | Tuesday 24 March 2026 02:46:15 +0000 (0:00:00.325) 0:03:12.589 ********* 2026-03-24 02:46:25.405367 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.405382 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.405395 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.405409 | orchestrator | 2026-03-24 02:46:25.405424 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-24 02:46:25.405438 | orchestrator | Tuesday 24 March 2026 02:46:15 +0000 (0:00:00.605) 0:03:13.194 ********* 2026-03-24 02:46:25.405453 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:46:25.405467 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:46:25.405483 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:46:25.405497 | orchestrator | 2026-03-24 02:46:25.405512 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-24 02:46:25.405526 | orchestrator | Tuesday 24 March 2026 02:46:16 +0000 (0:00:01.117) 0:03:14.311 ********* 
2026-03-24 02:46:25.405541 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:46:25.405555 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:46:25.405570 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:46:25.405584 | orchestrator | 2026-03-24 02:46:25.405599 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-24 02:46:25.405613 | orchestrator | Tuesday 24 March 2026 02:46:17 +0000 (0:00:00.749) 0:03:15.061 ********* 2026-03-24 02:46:25.405627 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:46:25.405642 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:46:25.405656 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:46:25.405670 | orchestrator | 2026-03-24 02:46:25.405685 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-24 02:46:25.405700 | orchestrator | Tuesday 24 March 2026 02:46:18 +0000 (0:00:00.692) 0:03:15.753 ********* 2026-03-24 02:46:25.405733 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.405749 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:46:25.405763 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:46:25.405777 | orchestrator | 2026-03-24 02:46:25.405793 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-24 02:46:25.405808 | orchestrator | Tuesday 24 March 2026 02:46:19 +0000 (0:00:00.915) 0:03:16.668 ********* 2026-03-24 02:46:25.405823 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:46:25.405837 | orchestrator | 2026-03-24 02:46:25.405852 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-24 02:46:25.405867 | orchestrator | Tuesday 24 March 2026 02:46:20 +0000 (0:00:01.314) 0:03:17.982 ********* 2026-03-24 02:46:25.405882 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:46:25.405896 | orchestrator | 2026-03-24 02:46:25.405911 | orchestrator | TASK [ceph-mon : 
Copy admin keyring over to mons] ****************************** 2026-03-24 02:46:25.405926 | orchestrator | Tuesday 24 March 2026 02:46:21 +0000 (0:00:00.702) 0:03:18.684 ********* 2026-03-24 02:46:25.405942 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-24 02:46:25.405956 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:46:25.405971 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:46:25.405986 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-24 02:46:25.406001 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-24 02:46:25.406078 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-24 02:46:25.406094 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-24 02:46:25.406110 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-24 02:46:25.406125 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-24 02:46:25.406140 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-24 02:46:25.406165 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-24 02:46:25.406181 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-24 02:46:25.406196 | orchestrator | 2026-03-24 02:46:25.406212 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-24 02:46:25.406227 | orchestrator | Tuesday 24 March 2026 02:46:24 +0000 (0:00:03.156) 0:03:21.841 ********* 2026-03-24 02:46:25.406242 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:46:25.406258 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:46:25.406273 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:46:25.406289 | orchestrator | 2026-03-24 02:46:25.406304 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-03-24 02:46:25.406334 | orchestrator | Tuesday 24 March 2026 02:46:25 +0000 (0:00:01.107) 0:03:22.948 ********* 2026-03-24 02:47:27.555015 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:47:27.555118 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:47:27.555134 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:47:27.555146 | orchestrator | 2026-03-24 02:47:27.555155 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-24 02:47:27.555163 | orchestrator | Tuesday 24 March 2026 02:46:25 +0000 (0:00:00.519) 0:03:23.467 ********* 2026-03-24 02:47:27.555170 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:47:27.555176 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:47:27.555183 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:47:27.555189 | orchestrator | 2026-03-24 02:47:27.555196 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-24 02:47:27.555202 | orchestrator | Tuesday 24 March 2026 02:46:26 +0000 (0:00:00.324) 0:03:23.792 ********* 2026-03-24 02:47:27.555209 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:47:27.555216 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:47:27.555222 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:47:27.555228 | orchestrator | 2026-03-24 02:47:27.555235 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-24 02:47:27.555241 | orchestrator | Tuesday 24 March 2026 02:46:27 +0000 (0:00:01.403) 0:03:25.195 ********* 2026-03-24 02:47:27.555247 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:47:27.555253 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:47:27.555260 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:47:27.555266 | orchestrator | 2026-03-24 02:47:27.555272 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-24 
02:47:27.555278 | orchestrator | Tuesday 24 March 2026 02:46:28 +0000 (0:00:01.247) 0:03:26.443 ********* 2026-03-24 02:47:27.555284 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:47:27.555292 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:47:27.555302 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:47:27.555312 | orchestrator | 2026-03-24 02:47:27.555322 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-24 02:47:27.555331 | orchestrator | Tuesday 24 March 2026 02:46:29 +0000 (0:00:00.519) 0:03:26.962 ********* 2026-03-24 02:47:27.555340 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:47:27.555347 | orchestrator | 2026-03-24 02:47:27.555353 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-24 02:47:27.555360 | orchestrator | Tuesday 24 March 2026 02:46:29 +0000 (0:00:00.505) 0:03:27.467 ********* 2026-03-24 02:47:27.555366 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:47:27.555372 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:47:27.555378 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:47:27.555384 | orchestrator | 2026-03-24 02:47:27.555391 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-24 02:47:27.555397 | orchestrator | Tuesday 24 March 2026 02:46:30 +0000 (0:00:00.304) 0:03:27.772 ********* 2026-03-24 02:47:27.555405 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:47:27.555416 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:47:27.555427 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:47:27.555455 | orchestrator | 2026-03-24 02:47:27.555464 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-24 02:47:27.555475 | orchestrator | Tuesday 24 March 2026 02:46:30 
+0000 (0:00:00.508) 0:03:28.280 ********* 2026-03-24 02:47:27.555486 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:47:27.555493 | orchestrator | 2026-03-24 02:47:27.555500 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-24 02:47:27.555506 | orchestrator | Tuesday 24 March 2026 02:46:31 +0000 (0:00:00.510) 0:03:28.791 ********* 2026-03-24 02:47:27.555512 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:47:27.555518 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:47:27.555524 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:47:27.555533 | orchestrator | 2026-03-24 02:47:27.555544 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-24 02:47:27.555555 | orchestrator | Tuesday 24 March 2026 02:46:33 +0000 (0:00:01.785) 0:03:30.576 ********* 2026-03-24 02:47:27.555562 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:47:27.555568 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:47:27.555574 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:47:27.555580 | orchestrator | 2026-03-24 02:47:27.555586 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-24 02:47:27.555593 | orchestrator | Tuesday 24 March 2026 02:46:34 +0000 (0:00:01.402) 0:03:31.979 ********* 2026-03-24 02:47:27.555599 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:47:27.555610 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:47:27.555622 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:47:27.555629 | orchestrator | 2026-03-24 02:47:27.555635 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-24 02:47:27.555641 | orchestrator | Tuesday 24 March 2026 02:46:36 +0000 (0:00:01.820) 0:03:33.799 ********* 2026-03-24 02:47:27.555648 | 
orchestrator | changed: [testbed-node-0] 2026-03-24 02:47:27.555654 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:47:27.555660 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:47:27.555666 | orchestrator | 2026-03-24 02:47:27.555672 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-24 02:47:27.555678 | orchestrator | Tuesday 24 March 2026 02:46:38 +0000 (0:00:02.198) 0:03:35.997 ********* 2026-03-24 02:47:27.555684 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:47:27.555690 | orchestrator | 2026-03-24 02:47:27.555697 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-24 02:47:27.555703 | orchestrator | Tuesday 24 March 2026 02:46:39 +0000 (0:00:00.727) 0:03:36.725 ********* 2026-03-24 02:47:27.555709 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-24 02:47:27.555715 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:27.555721 | orchestrator |
2026-03-24 02:47:27.555739 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-24 02:47:27.555785 | orchestrator | Tuesday 24 March 2026 02:47:00 +0000 (0:00:21.822) 0:03:58.548 *********
2026-03-24 02:47:27.555793 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:27.555800 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:27.555811 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:27.555823 | orchestrator |
2026-03-24 02:47:27.555831 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-24 02:47:27.555838 | orchestrator | Tuesday 24 March 2026 02:47:11 +0000 (0:00:10.487) 0:04:09.036 *********
2026-03-24 02:47:27.555844 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:27.555850 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:27.555856 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:27.555862 | orchestrator |
2026-03-24 02:47:27.555868 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-24 02:47:27.555882 | orchestrator | Tuesday 24 March 2026 02:47:11 +0000 (0:00:00.303) 0:04:09.340 *********
2026-03-24 02:47:27.555890 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e56cc796c8827e5eb615eff617aa4d58efdd649'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-24 02:47:27.555899 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e56cc796c8827e5eb615eff617aa4d58efdd649'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-24 02:47:27.555907 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e56cc796c8827e5eb615eff617aa4d58efdd649'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-24 02:47:27.555914 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e56cc796c8827e5eb615eff617aa4d58efdd649'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-24 02:47:27.555921 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e56cc796c8827e5eb615eff617aa4d58efdd649'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-24 02:47:27.555928 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e56cc796c8827e5eb615eff617aa4d58efdd649'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5e56cc796c8827e5eb615eff617aa4d58efdd649'}])
2026-03-24 02:47:27.555936 | orchestrator |
2026-03-24 02:47:27.555942 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-24 02:47:27.555948 | orchestrator | Tuesday 24 March 2026 02:47:25 +0000 (0:00:14.088) 0:04:23.428 *********
2026-03-24 02:47:27.555955 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:27.555961 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:27.555967 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:27.555973 | orchestrator |
2026-03-24 02:47:27.555979 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-24 02:47:27.555986 | orchestrator | Tuesday 24 March 2026 02:47:26 +0000 (0:00:00.328) 0:04:23.757 *********
2026-03-24 02:47:27.555996 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:47:27.556007 | orchestrator |
2026-03-24 02:47:27.556018 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-24 02:47:27.556028 | orchestrator | Tuesday 24 March 2026 02:47:26 +0000 (0:00:00.711) 0:04:24.468 *********
2026-03-24 02:47:27.556037 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:27.556047 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:27.556054 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:27.556060 | orchestrator |
2026-03-24 02:47:27.556067 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-24 02:47:27.556073 | orchestrator | Tuesday 24 March 2026 02:47:27 +0000 (0:00:00.306) 0:04:24.775 *********
2026-03-24 02:47:27.556084 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:27.556090 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:27.556097 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:27.556108 | orchestrator |
2026-03-24 02:47:27.556125 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-24 02:47:52.562921 | orchestrator | Tuesday 24 March 2026 02:47:27 +0000 (0:00:00.323) 0:04:25.098 *********
2026-03-24 02:47:52.563007 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 02:47:52.563015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-24 02:47:52.563020 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-24 02:47:52.563025 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563029 | orchestrator |
2026-03-24 02:47:52.563035 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-24 02:47:52.563039 | orchestrator | Tuesday 24 March 2026 02:47:28 +0000 (0:00:00.830) 0:04:25.929 *********
2026-03-24 02:47:52.563044 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563049 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:52.563053 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:52.563057 | orchestrator |
2026-03-24 02:47:52.563062 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-24 02:47:52.563066 | orchestrator |
2026-03-24 02:47:52.563070 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 02:47:52.563075 | orchestrator | Tuesday 24 March 2026 02:47:29 +0000 (0:00:00.767) 0:04:26.696 *********
2026-03-24 02:47:52.563079 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:47:52.563085 | orchestrator |
2026-03-24 02:47:52.563089 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 02:47:52.563093 | orchestrator | Tuesday 24 March 2026 02:47:29 +0000 (0:00:00.508) 0:04:27.205 *********
2026-03-24 02:47:52.563098 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:47:52.563102 | orchestrator |
2026-03-24 02:47:52.563106 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 02:47:52.563110 | orchestrator | Tuesday 24 March 2026 02:47:30 +0000 (0:00:00.685) 0:04:27.890 *********
2026-03-24 02:47:52.563114 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563118 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:52.563122 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:52.563126 | orchestrator |
2026-03-24 02:47:52.563130 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 02:47:52.563135 | orchestrator | Tuesday 24 March 2026 02:47:31 +0000 (0:00:00.746) 0:04:28.637 *********
2026-03-24 02:47:52.563139 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563143 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563147 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563151 | orchestrator |
2026-03-24 02:47:52.563155 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 02:47:52.563159 | orchestrator | Tuesday 24 March 2026 02:47:31 +0000 (0:00:00.290) 0:04:28.927 *********
2026-03-24 02:47:52.563163 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563167 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563171 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563175 | orchestrator |
2026-03-24 02:47:52.563180 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 02:47:52.563184 | orchestrator | Tuesday 24 March 2026 02:47:31 +0000 (0:00:00.480) 0:04:29.408 *********
2026-03-24 02:47:52.563188 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563192 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563197 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563201 | orchestrator |
2026-03-24 02:47:52.563205 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 02:47:52.563227 | orchestrator | Tuesday 24 March 2026 02:47:32 +0000 (0:00:00.296) 0:04:29.704 *********
2026-03-24 02:47:52.563231 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563235 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:52.563239 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:52.563243 | orchestrator |
2026-03-24 02:47:52.563247 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 02:47:52.563251 | orchestrator | Tuesday 24 March 2026 02:47:32 +0000 (0:00:00.699) 0:04:30.404 *********
2026-03-24 02:47:52.563256 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563260 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563264 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563268 | orchestrator |
2026-03-24 02:47:52.563272 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 02:47:52.563276 | orchestrator | Tuesday 24 March 2026 02:47:33 +0000 (0:00:00.302) 0:04:30.707 *********
2026-03-24 02:47:52.563280 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563285 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563291 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563297 | orchestrator |
2026-03-24 02:47:52.563304 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 02:47:52.563313 | orchestrator | Tuesday 24 March 2026 02:47:33 +0000 (0:00:00.496) 0:04:31.203 *********
2026-03-24 02:47:52.563322 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563328 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:52.563335 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:52.563342 | orchestrator |
2026-03-24 02:47:52.563348 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 02:47:52.563355 | orchestrator | Tuesday 24 March 2026 02:47:34 +0000 (0:00:00.728) 0:04:31.932 *********
2026-03-24 02:47:52.563361 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563367 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:52.563372 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:52.563378 | orchestrator |
2026-03-24 02:47:52.563386 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 02:47:52.563393 | orchestrator | Tuesday 24 March 2026 02:47:35 +0000 (0:00:00.720) 0:04:32.652 *********
2026-03-24 02:47:52.563399 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563405 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563412 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563418 | orchestrator |
2026-03-24 02:47:52.563438 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 02:47:52.563461 | orchestrator | Tuesday 24 March 2026 02:47:35 +0000 (0:00:00.350) 0:04:33.003 *********
2026-03-24 02:47:52.563468 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563472 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:52.563476 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:52.563480 | orchestrator |
2026-03-24 02:47:52.563484 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 02:47:52.563488 | orchestrator | Tuesday 24 March 2026 02:47:36 +0000 (0:00:00.563) 0:04:33.566 *********
2026-03-24 02:47:52.563492 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563496 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563500 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563505 | orchestrator |
2026-03-24 02:47:52.563509 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 02:47:52.563513 | orchestrator | Tuesday 24 March 2026 02:47:36 +0000 (0:00:00.302) 0:04:33.869 *********
2026-03-24 02:47:52.563517 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563521 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563525 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563529 | orchestrator |
2026-03-24 02:47:52.563533 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 02:47:52.563537 | orchestrator | Tuesday 24 March 2026 02:47:36 +0000 (0:00:00.331) 0:04:34.200 *********
2026-03-24 02:47:52.563547 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563551 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563555 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563559 | orchestrator |
2026-03-24 02:47:52.563563 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 02:47:52.563567 | orchestrator | Tuesday 24 March 2026 02:47:36 +0000 (0:00:00.284) 0:04:34.485 *********
2026-03-24 02:47:52.563571 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563575 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563579 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563583 | orchestrator |
2026-03-24 02:47:52.563587 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 02:47:52.563591 | orchestrator | Tuesday 24 March 2026 02:47:37 +0000 (0:00:00.503) 0:04:34.989 *********
2026-03-24 02:47:52.563595 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563599 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563603 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563607 | orchestrator |
2026-03-24 02:47:52.563611 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 02:47:52.563616 | orchestrator | Tuesday 24 March 2026 02:47:37 +0000 (0:00:00.304) 0:04:35.294 *********
2026-03-24 02:47:52.563620 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563624 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:52.563628 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:52.563632 | orchestrator |
2026-03-24 02:47:52.563636 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 02:47:52.563640 | orchestrator | Tuesday 24 March 2026 02:47:38 +0000 (0:00:00.334) 0:04:35.629 *********
2026-03-24 02:47:52.563644 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563648 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:52.563652 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:52.563656 | orchestrator |
2026-03-24 02:47:52.563660 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 02:47:52.563664 | orchestrator | Tuesday 24 March 2026 02:47:38 +0000 (0:00:00.333) 0:04:35.962 *********
2026-03-24 02:47:52.563668 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563672 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:47:52.563677 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:47:52.563681 | orchestrator |
2026-03-24 02:47:52.563685 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-24 02:47:52.563689 | orchestrator | Tuesday 24 March 2026 02:47:39 +0000 (0:00:00.778) 0:04:36.741 *********
2026-03-24 02:47:52.563754 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 02:47:52.563777 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 02:47:52.563782 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 02:47:52.563786 | orchestrator |
2026-03-24 02:47:52.563790 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-24 02:47:52.563794 | orchestrator | Tuesday 24 March 2026 02:47:39 +0000 (0:00:00.631) 0:04:37.372 *********
2026-03-24 02:47:52.563798 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:47:52.563803 | orchestrator |
2026-03-24 02:47:52.563807 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-24 02:47:52.563811 | orchestrator | Tuesday 24 March 2026 02:47:40 +0000 (0:00:00.504) 0:04:37.876 *********
2026-03-24 02:47:52.563815 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:47:52.563819 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:47:52.563823 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:47:52.563827 | orchestrator |
2026-03-24 02:47:52.563831 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-24 02:47:52.563835 | orchestrator | Tuesday 24 March 2026 02:47:41 +0000 (0:00:00.992) 0:04:38.869 *********
2026-03-24 02:47:52.563844 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:47:52.563849 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:47:52.563853 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:47:52.563857 | orchestrator |
2026-03-24 02:47:52.563861 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-24 02:47:52.563865 | orchestrator | Tuesday 24 March 2026 02:47:41 +0000 (0:00:00.313) 0:04:39.183 *********
2026-03-24 02:47:52.563871 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-24 02:47:52.563878 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-24 02:47:52.563888 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-24 02:47:52.563896 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-24 02:47:52.563902 | orchestrator |
2026-03-24 02:47:52.563908 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-24 02:47:52.563920 | orchestrator | Tuesday 24 March 2026 02:47:52 +0000 (0:00:10.553) 0:04:49.737 *********
2026-03-24 02:47:52.563926 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:47:52.563939 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:48:54.580974 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:48:54.581092 | orchestrator |
2026-03-24 02:48:54.581110 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-24 02:48:54.581123 | orchestrator | Tuesday 24 March 2026 02:47:52 +0000 (0:00:00.373) 0:04:50.111 *********
2026-03-24 02:48:54.581135 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-24 02:48:54.581146 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-24 02:48:54.581158 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-24 02:48:54.581169 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-24 02:48:54.581180 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:48:54.581191 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:48:54.581202 | orchestrator |
2026-03-24 02:48:54.581214 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-24 02:48:54.581225 | orchestrator | Tuesday 24 March 2026 02:47:55 +0000 (0:00:02.572) 0:04:52.684 *********
2026-03-24 02:48:54.581236 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-24 02:48:54.581247 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-24 02:48:54.581258 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-24 02:48:54.581269 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-24 02:48:54.581280 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-24 02:48:54.581291 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-24 02:48:54.581302 | orchestrator |
2026-03-24 02:48:54.581313 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-24 02:48:54.581324 | orchestrator | Tuesday 24 March 2026 02:47:56 +0000 (0:00:01.229) 0:04:53.913 *********
2026-03-24 02:48:54.581335 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:48:54.581346 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:48:54.581357 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:48:54.581368 | orchestrator |
2026-03-24 02:48:54.581379 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-24 02:48:54.581390 | orchestrator | Tuesday 24 March 2026 02:47:57 +0000 (0:00:00.745) 0:04:54.659 *********
2026-03-24 02:48:54.581401 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:48:54.581412 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:48:54.581423 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:48:54.581434 | orchestrator |
2026-03-24 02:48:54.581444 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-24 02:48:54.581456 | orchestrator | Tuesday 24 March 2026 02:47:57 +0000 (0:00:00.311) 0:04:54.971 *********
2026-03-24 02:48:54.581466 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:48:54.581478 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:48:54.581513 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:48:54.581527 | orchestrator |
2026-03-24 02:48:54.581539 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-24 02:48:54.581552 | orchestrator | Tuesday 24 March 2026 02:47:57 +0000 (0:00:00.525) 0:04:55.496 *********
2026-03-24 02:48:54.581565 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:48:54.581577 | orchestrator |
2026-03-24 02:48:54.581590 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-24 02:48:54.581603 | orchestrator | Tuesday 24 March 2026 02:47:58 +0000 (0:00:00.521) 0:04:56.017 *********
2026-03-24 02:48:54.581615 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:48:54.581627 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:48:54.581639 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:48:54.581652 | orchestrator |
2026-03-24 02:48:54.581664 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-24 02:48:54.581676 | orchestrator | Tuesday 24 March 2026 02:47:58 +0000 (0:00:00.317) 0:04:56.335 *********
2026-03-24 02:48:54.581689 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:48:54.581701 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:48:54.581713 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:48:54.581726 | orchestrator |
2026-03-24 02:48:54.581738 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-24 02:48:54.581750 | orchestrator | Tuesday 24 March 2026 02:47:59 +0000 (0:00:00.547) 0:04:56.883 *********
2026-03-24 02:48:54.581762 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:48:54.581775 | orchestrator |
2026-03-24 02:48:54.581839 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-24 02:48:54.581859 | orchestrator | Tuesday 24 March 2026 02:47:59 +0000 (0:00:00.504) 0:04:57.387 *********
2026-03-24 02:48:54.581878 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:48:54.581897 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:48:54.581914 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:48:54.581932 | orchestrator |
2026-03-24 02:48:54.581949 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-24 02:48:54.581961 | orchestrator | Tuesday 24 March 2026 02:48:01 +0000 (0:00:01.267) 0:04:58.655 *********
2026-03-24 02:48:54.581971 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:48:54.581982 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:48:54.581993 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:48:54.582003 | orchestrator |
2026-03-24 02:48:54.582069 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-24 02:48:54.582084 | orchestrator | Tuesday 24 March 2026 02:48:02 +0000 (0:00:01.534) 0:05:00.189 *********
2026-03-24 02:48:54.582095 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:48:54.582105 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:48:54.582116 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:48:54.582127 | orchestrator |
2026-03-24 02:48:54.582138 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-24 02:48:54.582149 | orchestrator | Tuesday 24 March 2026 02:48:04 +0000 (0:00:01.846) 0:05:02.036 *********
2026-03-24 02:48:54.582160 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:48:54.582184 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:48:54.582196 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:48:54.582206 | orchestrator |
2026-03-24 02:48:54.582238 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-24 02:48:54.582249 | orchestrator | Tuesday 24 March 2026 02:48:06 +0000 (0:00:01.978) 0:05:04.014 *********
2026-03-24 02:48:54.582260 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:48:54.582271 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:48:54.582282 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-24 02:48:54.582293 | orchestrator |
2026-03-24 02:48:54.582304 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-24 02:48:54.582326 | orchestrator | Tuesday 24 March 2026 02:48:07 +0000 (0:00:00.604) 0:05:04.618 *********
2026-03-24 02:48:54.582337 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-24 02:48:54.582348 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-24 02:48:54.582359 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-24 02:48:54.582370 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-24 02:48:54.582381 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-03-24 02:48:54.582392 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-24 02:48:54.582402 | orchestrator |
2026-03-24 02:48:54.582413 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-24 02:48:54.582424 | orchestrator | Tuesday 24 March 2026 02:48:37 +0000 (0:00:30.210) 0:05:34.829 *********
2026-03-24 02:48:54.582435 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-24 02:48:54.582445 | orchestrator |
2026-03-24 02:48:54.582456 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-24 02:48:54.582467 | orchestrator | Tuesday 24 March 2026 02:48:38 +0000 (0:00:01.348) 0:05:36.177 *********
2026-03-24 02:48:54.582478 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:48:54.582489 | orchestrator |
2026-03-24 02:48:54.582499 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-24 02:48:54.582510 | orchestrator | Tuesday 24 March 2026 02:48:38 +0000 (0:00:00.303) 0:05:36.481 *********
2026-03-24 02:48:54.582521 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:48:54.582532 | orchestrator |
2026-03-24 02:48:54.582542 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-24 02:48:54.582553 | orchestrator | Tuesday 24 March 2026 02:48:39 +0000 (0:00:00.152) 0:05:36.634 *********
2026-03-24 02:48:54.582564 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-24 02:48:54.582575 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-24 02:48:54.582585 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-24 02:48:54.582596 | orchestrator |
2026-03-24 02:48:54.582607 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-24 02:48:54.582617 | orchestrator | Tuesday 24 March 2026 02:48:45 +0000 (0:00:06.410) 0:05:43.044 *********
2026-03-24 02:48:54.582628 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-24 02:48:54.582639 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-24 02:48:54.582650 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-24 02:48:54.582661 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-24 02:48:54.582671 | orchestrator |
2026-03-24 02:48:54.582682 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-24 02:48:54.582693 | orchestrator | Tuesday 24 March 2026 02:48:50 +0000 (0:00:05.460) 0:05:48.505 *********
2026-03-24 02:48:54.582703 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:48:54.582714 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:48:54.582725 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:48:54.582735 | orchestrator |
2026-03-24 02:48:54.582746 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-24 02:48:54.582757 | orchestrator | Tuesday 24 March 2026 02:48:51 +0000 (0:00:00.708) 0:05:49.213 *********
2026-03-24 02:48:54.582767 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:48:54.582780 | orchestrator |
2026-03-24 02:48:54.582832 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-24 02:48:54.582849 | orchestrator | Tuesday 24 March 2026 02:48:52 +0000 (0:00:00.520) 0:05:49.733 *********
2026-03-24 02:48:54.582865 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:48:54.582881 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:48:54.582899 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:48:54.582916 | orchestrator |
2026-03-24 02:48:54.582942 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-24 02:48:54.582962 | orchestrator | Tuesday 24 March 2026 02:48:52 +0000 (0:00:00.542) 0:05:50.276 *********
2026-03-24 02:48:54.582980 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:48:54.582996 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:48:54.583014 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:48:54.583033 | orchestrator |
2026-03-24 02:48:54.583049 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-24 02:48:54.583067 | orchestrator | Tuesday 24 March 2026 02:48:53 +0000 (0:00:01.238) 0:05:51.515 *********
2026-03-24 02:48:54.583085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 02:48:54.583104 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-24 02:48:54.583133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-24 02:48:54.583152 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:48:54.583170 | orchestrator |
2026-03-24 02:48:54.583203 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-24 02:49:12.653633 | orchestrator | Tuesday 24 March 2026 02:48:54 +0000 (0:00:00.614) 0:05:52.130 *********
2026-03-24 02:49:12.653745 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:49:12.653760 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:49:12.653770 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:49:12.653781 | orchestrator |
2026-03-24 02:49:12.653844 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-24 02:49:12.653856 | orchestrator |
2026-03-24 02:49:12.653867 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 02:49:12.653877 | orchestrator | Tuesday 24 March 2026 02:48:55 +0000 (0:00:00.534) 0:05:52.664 *********
2026-03-24 02:49:12.653888 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:49:12.653898 | orchestrator |
2026-03-24 02:49:12.653908 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 02:49:12.653918 | orchestrator | Tuesday 24 March 2026 02:48:55 +0000 (0:00:00.738) 0:05:53.402 *********
2026-03-24 02:49:12.653927 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:49:12.653938 | orchestrator |
2026-03-24 02:49:12.653947 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 02:49:12.653957 | orchestrator | Tuesday 24 March 2026 02:48:56 +0000 (0:00:00.710) 0:05:54.113 *********
2026-03-24 02:49:12.653967 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:49:12.653977 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:49:12.653987 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:49:12.653997 | orchestrator |
2026-03-24 02:49:12.654006 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 02:49:12.654062 | orchestrator | Tuesday 24 March 2026 02:48:56 +0000 (0:00:00.324) 0:05:54.437 *********
2026-03-24 02:49:12.654073 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:49:12.654083 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:49:12.654093 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:49:12.654117 | orchestrator |
2026-03-24 02:49:12.654135 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 02:49:12.654152 | orchestrator | Tuesday 24 March 2026 02:48:57 +0000 (0:00:00.732) 0:05:55.170 *********
2026-03-24 02:49:12.654169 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.654186 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.654227 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.654246 | orchestrator | 2026-03-24 02:49:12.654264 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-24 02:49:12.654282 | orchestrator | Tuesday 24 March 2026 02:48:58 +0000 (0:00:00.852) 0:05:56.022 ********* 2026-03-24 02:49:12.654300 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.654318 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.654335 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.654352 | orchestrator | 2026-03-24 02:49:12.654372 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-24 02:49:12.654391 | orchestrator | Tuesday 24 March 2026 02:48:59 +0000 (0:00:00.997) 0:05:57.020 ********* 2026-03-24 02:49:12.654402 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:49:12.654413 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.654424 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.654435 | orchestrator | 2026-03-24 02:49:12.654446 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-24 02:49:12.654457 | orchestrator | Tuesday 24 March 2026 02:48:59 +0000 (0:00:00.322) 0:05:57.342 ********* 2026-03-24 02:49:12.654468 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:49:12.654479 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.654489 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.654500 | orchestrator | 2026-03-24 02:49:12.654511 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-24 02:49:12.654522 | orchestrator | Tuesday 24 March 2026 02:49:00 +0000 (0:00:00.286) 0:05:57.629 ********* 2026-03-24 02:49:12.654532 | 
orchestrator | skipping: [testbed-node-3] 2026-03-24 02:49:12.654543 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.654553 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.654564 | orchestrator | 2026-03-24 02:49:12.654575 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-24 02:49:12.654645 | orchestrator | Tuesday 24 March 2026 02:49:00 +0000 (0:00:00.291) 0:05:57.920 ********* 2026-03-24 02:49:12.654659 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.654670 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.654681 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.654692 | orchestrator | 2026-03-24 02:49:12.654703 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-24 02:49:12.654713 | orchestrator | Tuesday 24 March 2026 02:49:01 +0000 (0:00:00.973) 0:05:58.894 ********* 2026-03-24 02:49:12.654724 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.654735 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.654745 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.654756 | orchestrator | 2026-03-24 02:49:12.654767 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-24 02:49:12.654778 | orchestrator | Tuesday 24 March 2026 02:49:02 +0000 (0:00:00.747) 0:05:59.642 ********* 2026-03-24 02:49:12.654811 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:49:12.654831 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.654848 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.654859 | orchestrator | 2026-03-24 02:49:12.654870 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-24 02:49:12.654880 | orchestrator | Tuesday 24 March 2026 02:49:02 +0000 (0:00:00.284) 0:05:59.926 ********* 2026-03-24 02:49:12.654891 | orchestrator | skipping: 
[testbed-node-3] 2026-03-24 02:49:12.654901 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.654912 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.654922 | orchestrator | 2026-03-24 02:49:12.654933 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-24 02:49:12.654958 | orchestrator | Tuesday 24 March 2026 02:49:02 +0000 (0:00:00.281) 0:06:00.207 ********* 2026-03-24 02:49:12.654969 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.654980 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.654990 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.655001 | orchestrator | 2026-03-24 02:49:12.655043 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-24 02:49:12.655055 | orchestrator | Tuesday 24 March 2026 02:49:03 +0000 (0:00:00.523) 0:06:00.731 ********* 2026-03-24 02:49:12.655066 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.655077 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.655087 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.655098 | orchestrator | 2026-03-24 02:49:12.655109 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-24 02:49:12.655120 | orchestrator | Tuesday 24 March 2026 02:49:03 +0000 (0:00:00.313) 0:06:01.044 ********* 2026-03-24 02:49:12.655130 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.655141 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.655151 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.655161 | orchestrator | 2026-03-24 02:49:12.655172 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-24 02:49:12.655183 | orchestrator | Tuesday 24 March 2026 02:49:03 +0000 (0:00:00.327) 0:06:01.372 ********* 2026-03-24 02:49:12.655194 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:49:12.655204 | 
orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.655215 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.655225 | orchestrator | 2026-03-24 02:49:12.655236 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-24 02:49:12.655247 | orchestrator | Tuesday 24 March 2026 02:49:04 +0000 (0:00:00.284) 0:06:01.657 ********* 2026-03-24 02:49:12.655257 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:49:12.655268 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.655278 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.655289 | orchestrator | 2026-03-24 02:49:12.655299 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-24 02:49:12.655310 | orchestrator | Tuesday 24 March 2026 02:49:04 +0000 (0:00:00.527) 0:06:02.185 ********* 2026-03-24 02:49:12.655321 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:49:12.655331 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.655342 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.655352 | orchestrator | 2026-03-24 02:49:12.655363 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-24 02:49:12.655373 | orchestrator | Tuesday 24 March 2026 02:49:04 +0000 (0:00:00.293) 0:06:02.478 ********* 2026-03-24 02:49:12.655384 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.655395 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.655405 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.655416 | orchestrator | 2026-03-24 02:49:12.655426 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-24 02:49:12.655437 | orchestrator | Tuesday 24 March 2026 02:49:05 +0000 (0:00:00.314) 0:06:02.792 ********* 2026-03-24 02:49:12.655448 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.655458 | orchestrator | ok: 
[testbed-node-4] 2026-03-24 02:49:12.655469 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.655479 | orchestrator | 2026-03-24 02:49:12.655490 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-24 02:49:12.655501 | orchestrator | Tuesday 24 March 2026 02:49:05 +0000 (0:00:00.752) 0:06:03.545 ********* 2026-03-24 02:49:12.655511 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.655522 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.655532 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.655542 | orchestrator | 2026-03-24 02:49:12.655555 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-24 02:49:12.655574 | orchestrator | Tuesday 24 March 2026 02:49:06 +0000 (0:00:00.321) 0:06:03.867 ********* 2026-03-24 02:49:12.655592 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 02:49:12.655612 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 02:49:12.655629 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 02:49:12.655657 | orchestrator | 2026-03-24 02:49:12.655675 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-24 02:49:12.655692 | orchestrator | Tuesday 24 March 2026 02:49:06 +0000 (0:00:00.618) 0:06:04.486 ********* 2026-03-24 02:49:12.655710 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:49:12.655730 | orchestrator | 2026-03-24 02:49:12.655750 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-24 02:49:12.655769 | orchestrator | Tuesday 24 March 2026 02:49:07 +0000 (0:00:00.497) 0:06:04.983 ********* 2026-03-24 02:49:12.655838 | orchestrator | skipping: 
[testbed-node-3] 2026-03-24 02:49:12.655854 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.655875 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.655893 | orchestrator | 2026-03-24 02:49:12.655913 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-24 02:49:12.655934 | orchestrator | Tuesday 24 March 2026 02:49:07 +0000 (0:00:00.525) 0:06:05.509 ********* 2026-03-24 02:49:12.655955 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:49:12.655975 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:49:12.655997 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:49:12.656018 | orchestrator | 2026-03-24 02:49:12.656038 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-24 02:49:12.656053 | orchestrator | Tuesday 24 March 2026 02:49:08 +0000 (0:00:00.326) 0:06:05.836 ********* 2026-03-24 02:49:12.656064 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.656075 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.656086 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.656096 | orchestrator | 2026-03-24 02:49:12.656107 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-24 02:49:12.656117 | orchestrator | Tuesday 24 March 2026 02:49:08 +0000 (0:00:00.644) 0:06:06.480 ********* 2026-03-24 02:49:12.656128 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:49:12.656139 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:49:12.656149 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:49:12.656160 | orchestrator | 2026-03-24 02:49:12.656178 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-24 02:49:12.656189 | orchestrator | Tuesday 24 March 2026 02:49:09 +0000 (0:00:00.537) 0:06:07.018 ********* 2026-03-24 02:49:12.656211 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-24 02:50:15.609041 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-24 02:50:15.609982 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-24 02:50:15.610086 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-24 02:50:15.610121 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-24 02:50:15.610141 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-24 02:50:15.610160 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-24 02:50:15.610205 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-24 02:50:15.610225 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-24 02:50:15.610242 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-24 02:50:15.610259 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-24 02:50:15.610276 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-24 02:50:15.610294 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-24 02:50:15.610311 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-24 02:50:15.610359 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-24 02:50:15.610378 | orchestrator | 2026-03-24 02:50:15.610396 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-03-24 02:50:15.610413 | orchestrator | Tuesday 24 March 2026 02:49:12 +0000 (0:00:03.186) 0:06:10.204 ********* 2026-03-24 02:50:15.610424 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:50:15.610435 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:50:15.610444 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:50:15.610454 | orchestrator | 2026-03-24 02:50:15.610464 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-24 02:50:15.610474 | orchestrator | Tuesday 24 March 2026 02:49:12 +0000 (0:00:00.310) 0:06:10.515 ********* 2026-03-24 02:50:15.610484 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:50:15.610494 | orchestrator | 2026-03-24 02:50:15.610503 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-24 02:50:15.610513 | orchestrator | Tuesday 24 March 2026 02:49:13 +0000 (0:00:00.677) 0:06:11.192 ********* 2026-03-24 02:50:15.610523 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-24 02:50:15.610532 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-24 02:50:15.610542 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-24 02:50:15.610552 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-24 02:50:15.610562 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-24 02:50:15.610571 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-24 02:50:15.610581 | orchestrator | 2026-03-24 02:50:15.610591 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-24 02:50:15.610600 | orchestrator | Tuesday 24 March 2026 02:49:14 +0000 (0:00:01.031) 0:06:12.223 ********* 2026-03-24 02:50:15.610610 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:50:15.610619 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 02:50:15.610629 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 02:50:15.610638 | orchestrator | 2026-03-24 02:50:15.610648 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-24 02:50:15.610657 | orchestrator | Tuesday 24 March 2026 02:49:16 +0000 (0:00:02.251) 0:06:14.475 ********* 2026-03-24 02:50:15.610667 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-24 02:50:15.610677 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 02:50:15.610687 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:50:15.610696 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-24 02:50:15.610706 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-24 02:50:15.610716 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:50:15.610726 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-24 02:50:15.610735 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-24 02:50:15.610745 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:50:15.610754 | orchestrator | 2026-03-24 02:50:15.610764 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-24 02:50:15.610774 | orchestrator | Tuesday 24 March 2026 02:49:18 +0000 (0:00:01.232) 0:06:15.707 ********* 2026-03-24 02:50:15.610783 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 02:50:15.610793 | orchestrator | 2026-03-24 02:50:15.610803 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-24 02:50:15.610848 | orchestrator | Tuesday 24 March 2026 02:49:20 +0000 (0:00:02.207) 0:06:17.915 ********* 2026-03-24 02:50:15.610877 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:50:15.610888 | orchestrator | 2026-03-24 02:50:15.610905 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-24 02:50:15.610915 | orchestrator | Tuesday 24 March 2026 02:49:21 +0000 (0:00:00.711) 0:06:18.627 ********* 2026-03-24 02:50:15.610948 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'}) 2026-03-24 02:50:15.610959 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'}) 2026-03-24 02:50:15.610969 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'}) 2026-03-24 02:50:15.610979 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'}) 2026-03-24 02:50:15.610989 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}) 2026-03-24 02:50:15.610999 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'}) 2026-03-24 02:50:15.611008 | orchestrator | 2026-03-24 02:50:15.611018 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-24 02:50:15.611028 | orchestrator | Tuesday 24 March 2026 02:50:03 +0000 (0:00:42.905) 0:07:01.532 ********* 2026-03-24 02:50:15.611037 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:50:15.611047 | orchestrator | skipping: [testbed-node-4] 2026-03-24 
02:50:15.611056 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:50:15.611066 | orchestrator | 2026-03-24 02:50:15.611076 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-24 02:50:15.611085 | orchestrator | Tuesday 24 March 2026 02:50:04 +0000 (0:00:00.282) 0:07:01.815 ********* 2026-03-24 02:50:15.611095 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:50:15.611104 | orchestrator | 2026-03-24 02:50:15.611114 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-24 02:50:15.611124 | orchestrator | Tuesday 24 March 2026 02:50:04 +0000 (0:00:00.726) 0:07:02.542 ********* 2026-03-24 02:50:15.611133 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:50:15.611143 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:50:15.611153 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:50:15.611162 | orchestrator | 2026-03-24 02:50:15.611172 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-24 02:50:15.611182 | orchestrator | Tuesday 24 March 2026 02:50:05 +0000 (0:00:00.690) 0:07:03.232 ********* 2026-03-24 02:50:15.611192 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:50:15.611201 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:50:15.611211 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:50:15.611220 | orchestrator | 2026-03-24 02:50:15.611230 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-24 02:50:15.611240 | orchestrator | Tuesday 24 March 2026 02:50:08 +0000 (0:00:02.707) 0:07:05.939 ********* 2026-03-24 02:50:15.611250 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:50:15.611259 | orchestrator | 2026-03-24 02:50:15.611269 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-03-24 02:50:15.611279 | orchestrator | Tuesday 24 March 2026 02:50:09 +0000 (0:00:00.700) 0:07:06.639 ********* 2026-03-24 02:50:15.611288 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:50:15.611298 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:50:15.611308 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:50:15.611317 | orchestrator | 2026-03-24 02:50:15.611327 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-24 02:50:15.611337 | orchestrator | Tuesday 24 March 2026 02:50:10 +0000 (0:00:01.199) 0:07:07.839 ********* 2026-03-24 02:50:15.611352 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:50:15.611362 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:50:15.611372 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:50:15.611381 | orchestrator | 2026-03-24 02:50:15.611391 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-24 02:50:15.611401 | orchestrator | Tuesday 24 March 2026 02:50:11 +0000 (0:00:01.208) 0:07:09.048 ********* 2026-03-24 02:50:15.611411 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:50:15.611420 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:50:15.611430 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:50:15.611439 | orchestrator | 2026-03-24 02:50:15.611449 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-24 02:50:15.611458 | orchestrator | Tuesday 24 March 2026 02:50:13 +0000 (0:00:02.293) 0:07:11.342 ********* 2026-03-24 02:50:15.611468 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:50:15.611477 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:50:15.611487 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:50:15.611497 | orchestrator | 2026-03-24 02:50:15.611506 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-03-24 02:50:15.611516 | orchestrator | Tuesday 24 March 2026 02:50:14 +0000 (0:00:00.329) 0:07:11.671 ********* 2026-03-24 02:50:15.611583 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:50:15.611593 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:50:15.611603 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:50:15.611613 | orchestrator | 2026-03-24 02:50:15.611622 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-24 02:50:15.611632 | orchestrator | Tuesday 24 March 2026 02:50:14 +0000 (0:00:00.343) 0:07:12.014 ********* 2026-03-24 02:50:15.611641 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-24 02:50:15.611657 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-24 02:50:15.611667 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-24 02:50:15.611677 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-03-24 02:50:15.611687 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-03-24 02:50:15.611696 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-24 02:50:15.611706 | orchestrator | 2026-03-24 02:50:15.611722 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-24 02:50:51.512408 | orchestrator | Tuesday 24 March 2026 02:50:15 +0000 (0:00:01.138) 0:07:13.153 ********* 2026-03-24 02:50:51.512496 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-24 02:50:51.512509 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-24 02:50:51.512519 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-24 02:50:51.512529 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-24 02:50:51.512539 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-24 02:50:51.512548 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-24 02:50:51.512557 | orchestrator | 2026-03-24 02:50:51.512567 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-03-24 02:50:51.512576 | orchestrator | Tuesday 24 March 2026 02:50:18 +0000 (0:00:02.445) 0:07:15.598 ********* 2026-03-24 02:50:51.512585 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-24 02:50:51.512594 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-24 02:50:51.512602 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-24 02:50:51.512611 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-24 02:50:51.512620 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-24 02:50:51.512629 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-24 02:50:51.512638 | orchestrator | 2026-03-24 02:50:51.512647 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-24 02:50:51.512656 | orchestrator | Tuesday 24 March 2026 02:50:21 +0000 (0:00:03.865) 0:07:19.464 ********* 2026-03-24 02:50:51.512665 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:50:51.512675 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:50:51.512702 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-24 02:50:51.512708 | orchestrator | 2026-03-24 02:50:51.512713 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-24 02:50:51.512719 | orchestrator | Tuesday 24 March 2026 02:50:25 +0000 (0:00:03.179) 0:07:22.643 ********* 2026-03-24 02:50:51.512724 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:50:51.512730 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:50:51.512735 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-03-24 02:50:51.512742 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-24 02:50:51.512747 | orchestrator | 2026-03-24 02:50:51.512753 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-24 02:50:51.512759 | orchestrator | Tuesday 24 March 2026 02:50:37 +0000 (0:00:12.443) 0:07:35.086 ********* 2026-03-24 02:50:51.512764 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:50:51.512769 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:50:51.512775 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:50:51.512780 | orchestrator | 2026-03-24 02:50:51.512786 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 02:50:51.512791 | orchestrator | Tuesday 24 March 2026 02:50:38 +0000 (0:00:01.109) 0:07:36.195 ********* 2026-03-24 02:50:51.512797 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:50:51.512802 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:50:51.512807 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:50:51.512813 | orchestrator | 2026-03-24 02:50:51.512818 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-24 02:50:51.512846 | orchestrator | Tuesday 24 March 2026 02:50:38 +0000 (0:00:00.330) 0:07:36.526 ********* 2026-03-24 02:50:51.512857 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:50:51.512863 | orchestrator | 2026-03-24 02:50:51.512869 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-24 02:50:51.512874 | orchestrator | Tuesday 24 March 2026 02:50:39 +0000 (0:00:00.763) 0:07:37.289 ********* 2026-03-24 02:50:51.512880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:50:51.512885 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-03-24 02:50:51.512891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 02:50:51.512896 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.512901 | orchestrator |
2026-03-24 02:50:51.512907 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-24 02:50:51.512912 | orchestrator | Tuesday 24 March 2026 02:50:40 +0000 (0:00:00.416) 0:07:37.706 *********
2026-03-24 02:50:51.512917 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.512923 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:50:51.512928 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:50:51.512933 | orchestrator |
2026-03-24 02:50:51.512939 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-24 02:50:51.512945 | orchestrator | Tuesday 24 March 2026 02:50:40 +0000 (0:00:00.312) 0:07:38.018 *********
2026-03-24 02:50:51.512950 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.512955 | orchestrator |
2026-03-24 02:50:51.512961 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-24 02:50:51.512966 | orchestrator | Tuesday 24 March 2026 02:50:40 +0000 (0:00:00.213) 0:07:38.232 *********
2026-03-24 02:50:51.512971 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.512978 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:50:51.512984 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:50:51.512990 | orchestrator |
2026-03-24 02:50:51.512997 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-24 02:50:51.513003 | orchestrator | Tuesday 24 March 2026 02:50:41 +0000 (0:00:00.523) 0:07:38.756 *********
2026-03-24 02:50:51.513014 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513020 | orchestrator |
2026-03-24 02:50:51.513038 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-24 02:50:51.513044 | orchestrator | Tuesday 24 March 2026 02:50:41 +0000 (0:00:00.227) 0:07:38.983 *********
2026-03-24 02:50:51.513050 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513057 | orchestrator |
2026-03-24 02:50:51.513063 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-24 02:50:51.513082 | orchestrator | Tuesday 24 March 2026 02:50:41 +0000 (0:00:00.224) 0:07:39.207 *********
2026-03-24 02:50:51.513089 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513095 | orchestrator |
2026-03-24 02:50:51.513102 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-24 02:50:51.513108 | orchestrator | Tuesday 24 March 2026 02:50:41 +0000 (0:00:00.134) 0:07:39.342 *********
2026-03-24 02:50:51.513114 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513120 | orchestrator |
2026-03-24 02:50:51.513126 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-24 02:50:51.513132 | orchestrator | Tuesday 24 March 2026 02:50:42 +0000 (0:00:00.223) 0:07:39.566 *********
2026-03-24 02:50:51.513138 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513144 | orchestrator |
2026-03-24 02:50:51.513151 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-24 02:50:51.513157 | orchestrator | Tuesday 24 March 2026 02:50:42 +0000 (0:00:00.230) 0:07:39.797 *********
2026-03-24 02:50:51.513163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:50:51.513170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 02:50:51.513176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 02:50:51.513182 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513188 | orchestrator |
2026-03-24 02:50:51.513195 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-24 02:50:51.513201 | orchestrator | Tuesday 24 March 2026 02:50:42 +0000 (0:00:00.381) 0:07:40.178 *********
2026-03-24 02:50:51.513207 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513213 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:50:51.513219 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:50:51.513226 | orchestrator |
2026-03-24 02:50:51.513232 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-24 02:50:51.513238 | orchestrator | Tuesday 24 March 2026 02:50:42 +0000 (0:00:00.309) 0:07:40.487 *********
2026-03-24 02:50:51.513244 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513250 | orchestrator |
2026-03-24 02:50:51.513256 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-24 02:50:51.513263 | orchestrator | Tuesday 24 March 2026 02:50:43 +0000 (0:00:00.218) 0:07:40.706 *********
2026-03-24 02:50:51.513268 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513275 | orchestrator |
2026-03-24 02:50:51.513281 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-24 02:50:51.513287 | orchestrator |
2026-03-24 02:50:51.513294 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 02:50:51.513300 | orchestrator | Tuesday 24 March 2026 02:50:44 +0000 (0:00:01.130) 0:07:41.836 *********
2026-03-24 02:50:51.513306 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:50:51.513313 | orchestrator |
2026-03-24 02:50:51.513320 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 02:50:51.513326 | orchestrator | Tuesday 24 March 2026 02:50:45 +0000 (0:00:01.176) 0:07:43.012 *********
2026-03-24 02:50:51.513332 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:50:51.513342 | orchestrator |
2026-03-24 02:50:51.513349 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 02:50:51.513355 | orchestrator | Tuesday 24 March 2026 02:50:46 +0000 (0:00:01.213) 0:07:44.226 *********
2026-03-24 02:50:51.513361 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513366 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:50:51.513372 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:50:51.513377 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:50:51.513383 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:50:51.513388 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:50:51.513393 | orchestrator |
2026-03-24 02:50:51.513399 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 02:50:51.513404 | orchestrator | Tuesday 24 March 2026 02:50:47 +0000 (0:00:01.212) 0:07:45.439 *********
2026-03-24 02:50:51.513409 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:50:51.513415 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:50:51.513420 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:50:51.513425 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:50:51.513431 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:50:51.513436 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:50:51.513441 | orchestrator |
2026-03-24 02:50:51.513447 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 02:50:51.513452 | orchestrator | Tuesday 24 March 2026 02:50:48 +0000 (0:00:00.726) 0:07:46.165 *********
2026-03-24 02:50:51.513458 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:50:51.513463 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:50:51.513468 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:50:51.513474 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:50:51.513479 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:50:51.513484 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:50:51.513490 | orchestrator |
2026-03-24 02:50:51.513495 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 02:50:51.513500 | orchestrator | Tuesday 24 March 2026 02:50:49 +0000 (0:00:00.824) 0:07:46.990 *********
2026-03-24 02:50:51.513506 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:50:51.513511 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:50:51.513516 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:50:51.513522 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:50:51.513527 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:50:51.513532 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:50:51.513538 | orchestrator |
2026-03-24 02:50:51.513547 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 02:50:51.513552 | orchestrator | Tuesday 24 March 2026 02:50:50 +0000 (0:00:00.726) 0:07:47.717 *********
2026-03-24 02:50:51.513558 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:50:51.513563 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:50:51.513569 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:50:51.513578 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:18.440120 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:51:18.440226 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:51:18.440236 | orchestrator |
2026-03-24 02:51:18.440245 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 02:51:18.440254 | orchestrator | Tuesday 24 March 2026 02:50:51 +0000 (0:00:01.343) 0:07:49.061 *********
2026-03-24 02:51:18.440262 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:18.440270 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:18.440281 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:18.440291 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:51:18.440302 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:51:18.440312 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:51:18.440322 | orchestrator |
2026-03-24 02:51:18.440333 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 02:51:18.440343 | orchestrator | Tuesday 24 March 2026 02:50:52 +0000 (0:00:00.588) 0:07:49.649 *********
2026-03-24 02:51:18.440353 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:18.440387 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:18.440396 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:18.440406 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:51:18.440415 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:51:18.440424 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:51:18.440435 | orchestrator |
2026-03-24 02:51:18.440445 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 02:51:18.440455 | orchestrator | Tuesday 24 March 2026 02:50:52 +0000 (0:00:00.751) 0:07:50.401 *********
2026-03-24 02:51:18.440465 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:18.440475 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:18.440484 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:18.440495 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:18.440505 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:51:18.440515 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:51:18.440525 | orchestrator |
2026-03-24 02:51:18.440536 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 02:51:18.440547 | orchestrator | Tuesday 24 March 2026 02:50:53 +0000 (0:00:00.993) 0:07:51.394 *********
2026-03-24 02:51:18.440559 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:18.440570 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:18.440581 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:18.440592 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:18.440603 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:51:18.440610 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:51:18.440616 | orchestrator |
2026-03-24 02:51:18.440623 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 02:51:18.440630 | orchestrator | Tuesday 24 March 2026 02:50:55 +0000 (0:00:01.290) 0:07:52.685 *********
2026-03-24 02:51:18.440637 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:18.440644 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:18.440651 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:18.440658 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:51:18.440665 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:51:18.440673 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:51:18.440680 | orchestrator |
2026-03-24 02:51:18.440688 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 02:51:18.440696 | orchestrator | Tuesday 24 March 2026 02:50:55 +0000 (0:00:00.597) 0:07:53.282 *********
2026-03-24 02:51:18.440704 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:18.440711 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:18.440719 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:18.440726 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:18.440734 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:51:18.440741 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:51:18.440749 | orchestrator |
2026-03-24 02:51:18.440757 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 02:51:18.440765 | orchestrator | Tuesday 24 March 2026 02:50:56 +0000 (0:00:00.830) 0:07:54.113 *********
2026-03-24 02:51:18.440773 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:18.440780 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:18.440788 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:18.440796 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:51:18.440803 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:51:18.440811 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:51:18.440818 | orchestrator |
2026-03-24 02:51:18.440826 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 02:51:18.440855 | orchestrator | Tuesday 24 March 2026 02:50:57 +0000 (0:00:00.615) 0:07:54.729 *********
2026-03-24 02:51:18.440863 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:18.440871 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:18.440878 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:18.440886 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:51:18.440893 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:51:18.440909 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:51:18.440916 | orchestrator |
2026-03-24 02:51:18.440924 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 02:51:18.440931 | orchestrator | Tuesday 24 March 2026 02:50:57 +0000 (0:00:00.800) 0:07:55.529 *********
2026-03-24 02:51:18.440939 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:18.440946 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:18.440953 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:18.440961 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:51:18.440969 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:51:18.440976 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:51:18.440984 | orchestrator |
2026-03-24 02:51:18.440991 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 02:51:18.440999 | orchestrator | Tuesday 24 March 2026 02:50:58 +0000 (0:00:00.597) 0:07:56.127 *********
2026-03-24 02:51:18.441006 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:18.441014 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:18.441021 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:18.441028 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:51:18.441036 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:51:18.441044 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:51:18.441051 | orchestrator |
2026-03-24 02:51:18.441060 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 02:51:18.441073 | orchestrator | Tuesday 24 March 2026 02:50:59 +0000 (0:00:00.791) 0:07:56.919 *********
2026-03-24 02:51:18.441084 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:18.441096 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:18.441127 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:18.441139 | orchestrator | skipping: [testbed-node-0]
2026-03-24 02:51:18.441150 | orchestrator | skipping: [testbed-node-1]
2026-03-24 02:51:18.441161 | orchestrator | skipping: [testbed-node-2]
2026-03-24 02:51:18.441171 | orchestrator |
2026-03-24 02:51:18.441178 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 02:51:18.441185 | orchestrator | Tuesday 24 March 2026 02:50:59 +0000 (0:00:00.577) 0:07:57.496 *********
2026-03-24 02:51:18.441192 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:18.441198 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:18.441205 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:18.441212 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:18.441219 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:51:18.441225 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:51:18.441232 | orchestrator |
2026-03-24 02:51:18.441239 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 02:51:18.441245 | orchestrator | Tuesday 24 March 2026 02:51:00 +0000 (0:00:00.807) 0:07:58.304 *********
2026-03-24 02:51:18.441252 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:18.441259 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:18.441302 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:18.441310 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:18.441316 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:51:18.441323 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:51:18.441330 | orchestrator |
2026-03-24 02:51:18.441336 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 02:51:18.441343 | orchestrator | Tuesday 24 March 2026 02:51:01 +0000 (0:00:00.608) 0:07:58.912 *********
2026-03-24 02:51:18.441350 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:18.441356 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:18.441363 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:18.441370 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:18.441376 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:51:18.441383 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:51:18.441390 | orchestrator |
2026-03-24 02:51:18.441396 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-24 02:51:18.441403 | orchestrator | Tuesday 24 March 2026 02:51:02 +0000 (0:00:01.238) 0:08:00.151 *********
2026-03-24 02:51:18.441416 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-24 02:51:18.441423 | orchestrator |
2026-03-24 02:51:18.441430 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-24 02:51:18.441437 | orchestrator | Tuesday 24 March 2026 02:51:06 +0000 (0:00:03.952) 0:08:04.104 *********
2026-03-24 02:51:18.441443 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-24 02:51:18.441450 | orchestrator |
2026-03-24 02:51:18.441457 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-24 02:51:18.441464 | orchestrator | Tuesday 24 March 2026 02:51:09 +0000 (0:00:02.494) 0:08:06.598 *********
2026-03-24 02:51:18.441470 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:51:18.441477 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:51:18.441484 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:51:18.441490 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:18.441497 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:51:18.441503 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:51:18.441510 | orchestrator |
2026-03-24 02:51:18.441517 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-24 02:51:18.441523 | orchestrator | Tuesday 24 March 2026 02:51:10 +0000 (0:00:01.384) 0:08:07.983 *********
2026-03-24 02:51:18.441530 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:51:18.441536 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:51:18.441543 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:51:18.441549 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:51:18.441556 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:51:18.441563 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:51:18.441569 | orchestrator |
2026-03-24 02:51:18.441576 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-24 02:51:18.441582 | orchestrator | Tuesday 24 March 2026 02:51:11 +0000 (0:00:01.030) 0:08:09.014 *********
2026-03-24 02:51:18.441590 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:51:18.441598 | orchestrator |
2026-03-24 02:51:18.441605 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-24 02:51:18.441611 | orchestrator | Tuesday 24 March 2026 02:51:12 +0000 (0:00:01.025) 0:08:10.039 *********
2026-03-24 02:51:18.441618 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:51:18.441624 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:51:18.441631 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:51:18.441638 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:51:18.441644 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:51:18.441651 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:51:18.441657 | orchestrator |
2026-03-24 02:51:18.441664 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-24 02:51:18.441670 | orchestrator | Tuesday 24 March 2026 02:51:13 +0000 (0:00:01.398) 0:08:11.438 *********
2026-03-24 02:51:18.441677 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:51:18.441683 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:51:18.441690 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:51:18.441696 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:51:18.441703 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:51:18.441710 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:51:18.441716 | orchestrator |
2026-03-24 02:51:18.441723 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-24 02:51:18.441729 | orchestrator | Tuesday 24 March 2026 02:51:17 +0000 (0:00:03.174) 0:08:14.613 *********
2026-03-24 02:51:18.441741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 02:51:18.441748 | orchestrator |
2026-03-24 02:51:18.441754 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-24 02:51:18.441766 | orchestrator | Tuesday 24 March 2026 02:51:18 +0000 (0:00:01.038) 0:08:15.651 *********
2026-03-24 02:51:18.441772 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:18.441779 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:18.441791 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.454804 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:43.454971 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:51:43.454985 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:51:43.454993 | orchestrator |
2026-03-24 02:51:43.455002 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-24 02:51:43.455011 | orchestrator | Tuesday 24 March 2026 02:51:18 +0000 (0:00:00.541) 0:08:16.193 *********
2026-03-24 02:51:43.455017 | orchestrator | changed: [testbed-node-3]
2026-03-24 02:51:43.455024 | orchestrator | changed: [testbed-node-5]
2026-03-24 02:51:43.455031 | orchestrator | changed: [testbed-node-4]
2026-03-24 02:51:43.455037 | orchestrator | changed: [testbed-node-0]
2026-03-24 02:51:43.455044 | orchestrator | changed: [testbed-node-1]
2026-03-24 02:51:43.455050 | orchestrator | changed: [testbed-node-2]
2026-03-24 02:51:43.455057 | orchestrator |
2026-03-24 02:51:43.455063 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-24 02:51:43.455070 | orchestrator | Tuesday 24 March 2026 02:51:20 +0000 (0:00:02.366) 0:08:18.560 *********
2026-03-24 02:51:43.455077 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455083 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455089 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455095 | orchestrator | ok: [testbed-node-0]
2026-03-24 02:51:43.455102 | orchestrator | ok: [testbed-node-1]
2026-03-24 02:51:43.455108 | orchestrator | ok: [testbed-node-2]
2026-03-24 02:51:43.455114 | orchestrator |
2026-03-24 02:51:43.455120 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-24 02:51:43.455126 | orchestrator |
2026-03-24 02:51:43.455133 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 02:51:43.455140 | orchestrator | Tuesday 24 March 2026 02:51:21 +0000 (0:00:00.830) 0:08:19.390 *********
2026-03-24 02:51:43.455147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:51:43.455155 | orchestrator |
2026-03-24 02:51:43.455163 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 02:51:43.455169 | orchestrator | Tuesday 24 March 2026 02:51:22 +0000 (0:00:00.708) 0:08:20.099 *********
2026-03-24 02:51:43.455176 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:51:43.455182 | orchestrator |
2026-03-24 02:51:43.455189 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 02:51:43.455195 | orchestrator | Tuesday 24 March 2026 02:51:23 +0000 (0:00:00.491) 0:08:20.590 *********
2026-03-24 02:51:43.455202 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:43.455210 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.455216 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.455223 | orchestrator |
2026-03-24 02:51:43.455229 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 02:51:43.455236 | orchestrator | Tuesday 24 March 2026 02:51:23 +0000 (0:00:00.492) 0:08:21.083 *********
2026-03-24 02:51:43.455243 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455249 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455255 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455262 | orchestrator |
2026-03-24 02:51:43.455269 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 02:51:43.455276 | orchestrator | Tuesday 24 March 2026 02:51:24 +0000 (0:00:00.705) 0:08:21.789 *********
2026-03-24 02:51:43.455282 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455288 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455295 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455301 | orchestrator |
2026-03-24 02:51:43.455307 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 02:51:43.455338 | orchestrator | Tuesday 24 March 2026 02:51:24 +0000 (0:00:00.716) 0:08:22.506 *********
2026-03-24 02:51:43.455345 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455351 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455356 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455362 | orchestrator |
2026-03-24 02:51:43.455368 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 02:51:43.455375 | orchestrator | Tuesday 24 March 2026 02:51:25 +0000 (0:00:00.955) 0:08:23.461 *********
2026-03-24 02:51:43.455381 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:43.455387 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.455394 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.455401 | orchestrator |
2026-03-24 02:51:43.455408 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 02:51:43.455416 | orchestrator | Tuesday 24 March 2026 02:51:26 +0000 (0:00:00.281) 0:08:23.743 *********
2026-03-24 02:51:43.455422 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:43.455428 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.455435 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.455441 | orchestrator |
2026-03-24 02:51:43.455448 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 02:51:43.455454 | orchestrator | Tuesday 24 March 2026 02:51:26 +0000 (0:00:00.301) 0:08:24.045 *********
2026-03-24 02:51:43.455460 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:43.455466 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.455473 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.455478 | orchestrator |
2026-03-24 02:51:43.455485 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 02:51:43.455491 | orchestrator | Tuesday 24 March 2026 02:51:26 +0000 (0:00:00.302) 0:08:24.348 *********
2026-03-24 02:51:43.455498 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455504 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455511 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455517 | orchestrator |
2026-03-24 02:51:43.455523 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 02:51:43.455544 | orchestrator | Tuesday 24 March 2026 02:51:27 +0000 (0:00:00.913) 0:08:25.261 *********
2026-03-24 02:51:43.455550 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455556 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455562 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455568 | orchestrator |
2026-03-24 02:51:43.455574 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 02:51:43.455580 | orchestrator | Tuesday 24 March 2026 02:51:28 +0000 (0:00:00.727) 0:08:25.989 *********
2026-03-24 02:51:43.455604 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:43.455611 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.455617 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.455623 | orchestrator |
2026-03-24 02:51:43.455630 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 02:51:43.455636 | orchestrator | Tuesday 24 March 2026 02:51:28 +0000 (0:00:00.304) 0:08:26.293 *********
2026-03-24 02:51:43.455643 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:43.455649 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.455655 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.455660 | orchestrator |
2026-03-24 02:51:43.455667 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 02:51:43.455674 | orchestrator | Tuesday 24 March 2026 02:51:29 +0000 (0:00:00.292) 0:08:26.586 *********
2026-03-24 02:51:43.455680 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455686 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455692 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455698 | orchestrator |
2026-03-24 02:51:43.455705 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 02:51:43.455711 | orchestrator | Tuesday 24 March 2026 02:51:29 +0000 (0:00:00.541) 0:08:27.127 *********
2026-03-24 02:51:43.455725 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455732 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455738 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455744 | orchestrator |
2026-03-24 02:51:43.455750 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 02:51:43.455756 | orchestrator | Tuesday 24 March 2026 02:51:29 +0000 (0:00:00.320) 0:08:27.447 *********
2026-03-24 02:51:43.455762 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455768 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455774 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455780 | orchestrator |
2026-03-24 02:51:43.455787 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 02:51:43.455794 | orchestrator | Tuesday 24 March 2026 02:51:30 +0000 (0:00:00.319) 0:08:27.766 *********
2026-03-24 02:51:43.455800 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:43.455806 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.455812 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.455818 | orchestrator |
2026-03-24 02:51:43.455824 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 02:51:43.455830 | orchestrator | Tuesday 24 March 2026 02:51:30 +0000 (0:00:00.281) 0:08:28.048 *********
2026-03-24 02:51:43.455865 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:43.455872 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.455878 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.455884 | orchestrator |
2026-03-24 02:51:43.455890 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 02:51:43.455896 | orchestrator | Tuesday 24 March 2026 02:51:30 +0000 (0:00:00.510) 0:08:28.558 *********
2026-03-24 02:51:43.455901 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:51:43.455907 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.455912 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.455918 | orchestrator |
2026-03-24 02:51:43.455924 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 02:51:43.455930 | orchestrator | Tuesday 24 March 2026 02:51:31 +0000 (0:00:00.293) 0:08:28.852 *********
2026-03-24 02:51:43.455936 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455942 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455948 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455954 | orchestrator |
2026-03-24 02:51:43.455959 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 02:51:43.455962 | orchestrator | Tuesday 24 March 2026 02:51:31 +0000 (0:00:00.319) 0:08:29.172 *********
2026-03-24 02:51:43.455966 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:51:43.455970 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:51:43.455976 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:51:43.455982 | orchestrator |
2026-03-24 02:51:43.455987 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-24 02:51:43.455993 | orchestrator | Tuesday 24 March 2026 02:51:32 +0000 (0:00:00.721) 0:08:29.893 *********
2026-03-24 02:51:43.455999 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:51:43.456006 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:51:43.456012 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-24 02:51:43.456019 | orchestrator |
2026-03-24 02:51:43.456025 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-24 02:51:43.456031 | orchestrator | Tuesday 24 March 2026 02:51:32 +0000 (0:00:00.394) 0:08:30.288 *********
2026-03-24 02:51:43.456038 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-24 02:51:43.456044 | orchestrator |
2026-03-24 02:51:43.456050 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-24 02:51:43.456057 | orchestrator | Tuesday 24 March 2026 02:51:34 +0000 (0:00:02.141) 0:08:32.429 *********
2026-03-24 02:51:43.456065 | orchestrator | skipping: [testbed-node-3] =>
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-24 02:51:43.456082 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:51:43.456089 | orchestrator | 2026-03-24 02:51:43.456092 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-24 02:51:43.456096 | orchestrator | Tuesday 24 March 2026 02:51:35 +0000 (0:00:00.217) 0:08:32.646 ********* 2026-03-24 02:51:43.456107 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-24 02:51:43.456126 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-24 02:52:13.329309 | orchestrator | 2026-03-24 02:52:13.329425 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-24 02:52:13.329443 | orchestrator | Tuesday 24 March 2026 02:51:43 +0000 (0:00:08.354) 0:08:41.001 ********* 2026-03-24 02:52:13.329455 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 02:52:13.329466 | orchestrator | 2026-03-24 02:52:13.329478 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-24 02:52:13.329490 | orchestrator | Tuesday 24 March 2026 02:51:47 +0000 (0:00:03.612) 0:08:44.614 ********* 2026-03-24 02:52:13.329501 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-24 02:52:13.329513 | orchestrator | 2026-03-24 02:52:13.329524 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-24 02:52:13.329535 | orchestrator | Tuesday 24 March 2026 02:51:47 +0000 (0:00:00.749) 0:08:45.363 ********* 2026-03-24 02:52:13.329546 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-24 02:52:13.329557 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-24 02:52:13.329573 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-24 02:52:13.329593 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-24 02:52:13.329612 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-24 02:52:13.329642 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-24 02:52:13.329662 | orchestrator | 2026-03-24 02:52:13.329681 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-24 02:52:13.329699 | orchestrator | Tuesday 24 March 2026 02:51:48 +0000 (0:00:01.044) 0:08:46.408 ********* 2026-03-24 02:52:13.329719 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:52:13.329739 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 02:52:13.329758 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 02:52:13.329777 | orchestrator | 2026-03-24 02:52:13.329797 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-24 02:52:13.329816 | orchestrator | Tuesday 24 March 2026 02:51:51 +0000 (0:00:02.195) 0:08:48.604 ********* 2026-03-24 02:52:13.329829 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-24 02:52:13.329876 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-03-24 02:52:13.329892 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:13.329905 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-24 02:52:13.329917 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-24 02:52:13.329929 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-24 02:52:13.329942 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:13.329953 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-24 02:52:13.329993 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:52:13.330090 | orchestrator | 2026-03-24 02:52:13.330113 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-24 02:52:13.330130 | orchestrator | Tuesday 24 March 2026 02:51:52 +0000 (0:00:01.174) 0:08:49.779 ********* 2026-03-24 02:52:13.330149 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:13.330167 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:13.330185 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:52:13.330203 | orchestrator | 2026-03-24 02:52:13.330223 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-24 02:52:13.330241 | orchestrator | Tuesday 24 March 2026 02:51:55 +0000 (0:00:02.944) 0:08:52.724 ********* 2026-03-24 02:52:13.330259 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:13.330279 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:13.330298 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:52:13.330318 | orchestrator | 2026-03-24 02:52:13.330336 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-24 02:52:13.330354 | orchestrator | Tuesday 24 March 2026 02:51:55 +0000 (0:00:00.313) 0:08:53.037 ********* 2026-03-24 02:52:13.330386 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-24 02:52:13.330404 | orchestrator | 2026-03-24 02:52:13.330422 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-24 02:52:13.330439 | orchestrator | Tuesday 24 March 2026 02:51:56 +0000 (0:00:00.790) 0:08:53.828 ********* 2026-03-24 02:52:13.330457 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:52:13.330473 | orchestrator | 2026-03-24 02:52:13.330489 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-24 02:52:13.330504 | orchestrator | Tuesday 24 March 2026 02:51:56 +0000 (0:00:00.525) 0:08:54.353 ********* 2026-03-24 02:52:13.330521 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:13.330538 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:13.330556 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:52:13.330573 | orchestrator | 2026-03-24 02:52:13.330590 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-24 02:52:13.330628 | orchestrator | Tuesday 24 March 2026 02:51:58 +0000 (0:00:01.310) 0:08:55.663 ********* 2026-03-24 02:52:13.330649 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:13.330666 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:13.330686 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:52:13.330698 | orchestrator | 2026-03-24 02:52:13.330709 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-24 02:52:13.330720 | orchestrator | Tuesday 24 March 2026 02:51:59 +0000 (0:00:01.393) 0:08:57.057 ********* 2026-03-24 02:52:13.330730 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:13.330741 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:52:13.330752 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:13.330762 | orchestrator | 2026-03-24 
02:52:13.330796 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-24 02:52:13.330808 | orchestrator | Tuesday 24 March 2026 02:52:01 +0000 (0:00:01.907) 0:08:58.965 ********* 2026-03-24 02:52:13.330818 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:13.330829 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:52:13.330839 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:13.330876 | orchestrator | 2026-03-24 02:52:13.330887 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-24 02:52:13.330898 | orchestrator | Tuesday 24 March 2026 02:52:03 +0000 (0:00:02.021) 0:09:00.986 ********* 2026-03-24 02:52:13.330909 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:13.330920 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:13.330930 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:13.330941 | orchestrator | 2026-03-24 02:52:13.330952 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 02:52:13.330977 | orchestrator | Tuesday 24 March 2026 02:52:04 +0000 (0:00:01.424) 0:09:02.411 ********* 2026-03-24 02:52:13.330987 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:13.330998 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:13.331008 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:52:13.331019 | orchestrator | 2026-03-24 02:52:13.331030 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-24 02:52:13.331040 | orchestrator | Tuesday 24 March 2026 02:52:05 +0000 (0:00:00.677) 0:09:03.088 ********* 2026-03-24 02:52:13.331051 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:52:13.331062 | orchestrator | 2026-03-24 02:52:13.331072 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-24 02:52:13.331083 | orchestrator | Tuesday 24 March 2026 02:52:06 +0000 (0:00:00.711) 0:09:03.799 ********* 2026-03-24 02:52:13.331094 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:13.331104 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:13.331115 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:13.331125 | orchestrator | 2026-03-24 02:52:13.331136 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-24 02:52:13.331146 | orchestrator | Tuesday 24 March 2026 02:52:06 +0000 (0:00:00.303) 0:09:04.103 ********* 2026-03-24 02:52:13.331157 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:13.331168 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:13.331178 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:52:13.331188 | orchestrator | 2026-03-24 02:52:13.331199 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-24 02:52:13.331210 | orchestrator | Tuesday 24 March 2026 02:52:07 +0000 (0:00:01.314) 0:09:05.418 ********* 2026-03-24 02:52:13.331221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:52:13.331232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 02:52:13.331242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 02:52:13.331253 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:13.331264 | orchestrator | 2026-03-24 02:52:13.331274 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-24 02:52:13.331285 | orchestrator | Tuesday 24 March 2026 02:52:08 +0000 (0:00:00.831) 0:09:06.249 ********* 2026-03-24 02:52:13.331296 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:13.331306 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:13.331317 | orchestrator | ok: [testbed-node-5] 2026-03-24 
02:52:13.331327 | orchestrator | 2026-03-24 02:52:13.331338 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-24 02:52:13.331354 | orchestrator | 2026-03-24 02:52:13.331380 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 02:52:13.331402 | orchestrator | Tuesday 24 March 2026 02:52:09 +0000 (0:00:00.751) 0:09:07.001 ********* 2026-03-24 02:52:13.331420 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:52:13.331439 | orchestrator | 2026-03-24 02:52:13.331456 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-24 02:52:13.331474 | orchestrator | Tuesday 24 March 2026 02:52:09 +0000 (0:00:00.496) 0:09:07.497 ********* 2026-03-24 02:52:13.331492 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:52:13.331509 | orchestrator | 2026-03-24 02:52:13.331528 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-24 02:52:13.331545 | orchestrator | Tuesday 24 March 2026 02:52:10 +0000 (0:00:00.728) 0:09:08.225 ********* 2026-03-24 02:52:13.331564 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:13.331584 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:13.331603 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:52:13.331634 | orchestrator | 2026-03-24 02:52:13.331646 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-24 02:52:13.331657 | orchestrator | Tuesday 24 March 2026 02:52:10 +0000 (0:00:00.307) 0:09:08.532 ********* 2026-03-24 02:52:13.331667 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:13.331678 | orchestrator | ok: [testbed-node-4] 2026-03-24 
02:52:13.331689 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:13.331699 | orchestrator | 2026-03-24 02:52:13.331710 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-24 02:52:13.331720 | orchestrator | Tuesday 24 March 2026 02:52:11 +0000 (0:00:00.698) 0:09:09.230 ********* 2026-03-24 02:52:13.331731 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:13.331750 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:13.331760 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:13.331771 | orchestrator | 2026-03-24 02:52:13.331782 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-24 02:52:13.331792 | orchestrator | Tuesday 24 March 2026 02:52:12 +0000 (0:00:00.920) 0:09:10.151 ********* 2026-03-24 02:52:13.331803 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:13.331814 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:13.331824 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:13.331835 | orchestrator | 2026-03-24 02:52:13.331876 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-24 02:52:13.331899 | orchestrator | Tuesday 24 March 2026 02:52:13 +0000 (0:00:00.723) 0:09:10.874 ********* 2026-03-24 02:52:34.376758 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:34.376938 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:34.376953 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:52:34.376961 | orchestrator | 2026-03-24 02:52:34.376970 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-24 02:52:34.376979 | orchestrator | Tuesday 24 March 2026 02:52:13 +0000 (0:00:00.304) 0:09:11.179 ********* 2026-03-24 02:52:34.376988 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:34.376997 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:34.377005 | orchestrator | skipping: 
[testbed-node-5] 2026-03-24 02:52:34.377012 | orchestrator | 2026-03-24 02:52:34.377022 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-24 02:52:34.377030 | orchestrator | Tuesday 24 March 2026 02:52:13 +0000 (0:00:00.298) 0:09:11.478 ********* 2026-03-24 02:52:34.377039 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:34.377046 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:34.377052 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:52:34.377059 | orchestrator | 2026-03-24 02:52:34.377066 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-24 02:52:34.377074 | orchestrator | Tuesday 24 March 2026 02:52:14 +0000 (0:00:00.512) 0:09:11.990 ********* 2026-03-24 02:52:34.377080 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:34.377088 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:34.377096 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:34.377102 | orchestrator | 2026-03-24 02:52:34.377109 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-24 02:52:34.377117 | orchestrator | Tuesday 24 March 2026 02:52:15 +0000 (0:00:00.743) 0:09:12.734 ********* 2026-03-24 02:52:34.377124 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:34.377132 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:34.377138 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:34.377145 | orchestrator | 2026-03-24 02:52:34.377152 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-24 02:52:34.377160 | orchestrator | Tuesday 24 March 2026 02:52:15 +0000 (0:00:00.716) 0:09:13.450 ********* 2026-03-24 02:52:34.377176 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:34.377185 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:34.377192 | orchestrator | skipping: [testbed-node-5] 2026-03-24 
02:52:34.377200 | orchestrator | 2026-03-24 02:52:34.377208 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-24 02:52:34.377241 | orchestrator | Tuesday 24 March 2026 02:52:16 +0000 (0:00:00.319) 0:09:13.770 ********* 2026-03-24 02:52:34.377249 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:34.377255 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:34.377262 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:52:34.377269 | orchestrator | 2026-03-24 02:52:34.377277 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-24 02:52:34.377285 | orchestrator | Tuesday 24 March 2026 02:52:16 +0000 (0:00:00.557) 0:09:14.328 ********* 2026-03-24 02:52:34.377292 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:34.377300 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:34.377308 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:34.377315 | orchestrator | 2026-03-24 02:52:34.377322 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-24 02:52:34.377328 | orchestrator | Tuesday 24 March 2026 02:52:17 +0000 (0:00:00.322) 0:09:14.651 ********* 2026-03-24 02:52:34.377333 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:34.377339 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:34.377344 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:34.377349 | orchestrator | 2026-03-24 02:52:34.377354 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-24 02:52:34.377360 | orchestrator | Tuesday 24 March 2026 02:52:17 +0000 (0:00:00.334) 0:09:14.985 ********* 2026-03-24 02:52:34.377365 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:34.377371 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:34.377378 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:34.377385 | orchestrator | 2026-03-24 
02:52:34.377392 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-24 02:52:34.377399 | orchestrator | Tuesday 24 March 2026 02:52:17 +0000 (0:00:00.317) 0:09:15.303 ********* 2026-03-24 02:52:34.377406 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:34.377414 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:34.377422 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:52:34.377429 | orchestrator | 2026-03-24 02:52:34.377437 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-24 02:52:34.377446 | orchestrator | Tuesday 24 March 2026 02:52:18 +0000 (0:00:00.516) 0:09:15.819 ********* 2026-03-24 02:52:34.377454 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:34.377463 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:34.377470 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:52:34.377478 | orchestrator | 2026-03-24 02:52:34.377486 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-24 02:52:34.377494 | orchestrator | Tuesday 24 March 2026 02:52:18 +0000 (0:00:00.307) 0:09:16.127 ********* 2026-03-24 02:52:34.377503 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:34.377510 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:34.377518 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:52:34.377525 | orchestrator | 2026-03-24 02:52:34.377531 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-24 02:52:34.377538 | orchestrator | Tuesday 24 March 2026 02:52:18 +0000 (0:00:00.291) 0:09:16.419 ********* 2026-03-24 02:52:34.377546 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:34.377554 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:34.377561 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:34.377570 | orchestrator | 2026-03-24 02:52:34.377592 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-24 02:52:34.377600 | orchestrator | Tuesday 24 March 2026 02:52:19 +0000 (0:00:00.322) 0:09:16.742 ********* 2026-03-24 02:52:34.377608 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:52:34.377616 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:52:34.377623 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:52:34.377631 | orchestrator | 2026-03-24 02:52:34.377639 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-24 02:52:34.377647 | orchestrator | Tuesday 24 March 2026 02:52:19 +0000 (0:00:00.770) 0:09:17.512 ********* 2026-03-24 02:52:34.377675 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:52:34.377682 | orchestrator | 2026-03-24 02:52:34.377686 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-24 02:52:34.377691 | orchestrator | Tuesday 24 March 2026 02:52:20 +0000 (0:00:00.501) 0:09:18.013 ********* 2026-03-24 02:52:34.377696 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:52:34.377700 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 02:52:34.377705 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 02:52:34.377710 | orchestrator | 2026-03-24 02:52:34.377715 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-24 02:52:34.377719 | orchestrator | Tuesday 24 March 2026 02:52:22 +0000 (0:00:02.424) 0:09:20.437 ********* 2026-03-24 02:52:34.377724 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-24 02:52:34.377729 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 02:52:34.377734 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:34.377738 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-24 02:52:34.377743 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-24 02:52:34.377747 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:34.377752 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-24 02:52:34.377757 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-24 02:52:34.377761 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:52:34.377766 | orchestrator | 2026-03-24 02:52:34.377770 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-24 02:52:34.377775 | orchestrator | Tuesday 24 March 2026 02:52:24 +0000 (0:00:01.425) 0:09:21.863 ********* 2026-03-24 02:52:34.377779 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:52:34.377784 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:52:34.377788 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:52:34.377793 | orchestrator | 2026-03-24 02:52:34.377845 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-24 02:52:34.377854 | orchestrator | Tuesday 24 March 2026 02:52:24 +0000 (0:00:00.293) 0:09:22.156 ********* 2026-03-24 02:52:34.377862 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:52:34.377870 | orchestrator | 2026-03-24 02:52:34.377877 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-24 02:52:34.377884 | orchestrator | Tuesday 24 March 2026 02:52:25 +0000 (0:00:00.517) 0:09:22.674 ********* 2026-03-24 02:52:34.377892 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 02:52:34.377899 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-24 02:52:34.377904 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 02:52:34.377908 | orchestrator | 2026-03-24 02:52:34.377915 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-24 02:52:34.377922 | orchestrator | Tuesday 24 March 2026 02:52:26 +0000 (0:00:01.088) 0:09:23.762 ********* 2026-03-24 02:52:34.377931 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:52:34.377938 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-24 02:52:34.377946 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:52:34.377954 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-24 02:52:34.377965 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:52:34.377969 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-24 02:52:34.377974 | orchestrator | 2026-03-24 02:52:34.377979 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-24 02:52:34.377983 | orchestrator | Tuesday 24 March 2026 02:52:30 +0000 (0:00:04.423) 0:09:28.186 ********* 2026-03-24 02:52:34.377988 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:52:34.377994 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 02:52:34.378001 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:52:34.378008 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 02:52:34.378067 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 02:52:34.378078 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 02:52:34.378083 | orchestrator | 2026-03-24 02:52:34.378087 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-24 02:52:34.378092 | orchestrator | Tuesday 24 March 2026 02:52:32 +0000 (0:00:02.312) 0:09:30.498 ********* 2026-03-24 02:52:34.378097 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-24 02:52:34.378101 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:52:34.378106 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-24 02:52:34.378110 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:52:34.378115 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-24 02:52:34.378126 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:53:18.343138 | orchestrator | 2026-03-24 02:53:18.343250 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-24 02:53:18.343267 | orchestrator | Tuesday 24 March 2026 02:52:34 +0000 (0:00:01.418) 0:09:31.916 ********* 2026-03-24 02:53:18.343278 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-24 02:53:18.343288 | orchestrator | 2026-03-24 02:53:18.343299 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-24 02:53:18.343309 | orchestrator | Tuesday 24 March 2026 02:52:34 +0000 (0:00:00.228) 0:09:32.145 ********* 2026-03-24 02:53:18.343319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-24 02:53:18.343331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 02:53:18.343341 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 02:53:18.343350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 02:53:18.343360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 02:53:18.343370 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:18.343381 | orchestrator | 2026-03-24 02:53:18.343391 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-24 02:53:18.343400 | orchestrator | Tuesday 24 March 2026 02:52:35 +0000 (0:00:00.565) 0:09:32.710 ********* 2026-03-24 02:53:18.343410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 02:53:18.343420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 02:53:18.343430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 02:53:18.343463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 02:53:18.343473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 02:53:18.343483 | orchestrator | skipping: [testbed-node-3] 2026-03-24 
02:53:18.343493 | orchestrator | 2026-03-24 02:53:18.343503 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-24 02:53:18.343512 | orchestrator | Tuesday 24 March 2026 02:52:35 +0000 (0:00:00.593) 0:09:33.304 ********* 2026-03-24 02:53:18.343522 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 02:53:18.343533 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 02:53:18.343543 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 02:53:18.343552 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 02:53:18.343562 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 02:53:18.343571 | orchestrator | 2026-03-24 02:53:18.343608 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-24 02:53:18.343618 | orchestrator | Tuesday 24 March 2026 02:53:05 +0000 (0:00:30.222) 0:10:03.526 ********* 2026-03-24 02:53:18.343628 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:18.343637 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:18.343647 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:18.343656 | orchestrator | 2026-03-24 02:53:18.343666 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-24 02:53:18.343676 | orchestrator | 
Tuesday 24 March 2026 02:53:06 +0000 (0:00:00.292) 0:10:03.818 ********* 2026-03-24 02:53:18.343687 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:18.343698 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:18.343709 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:18.343720 | orchestrator | 2026-03-24 02:53:18.343765 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-24 02:53:18.343801 | orchestrator | Tuesday 24 March 2026 02:53:06 +0000 (0:00:00.300) 0:10:04.119 ********* 2026-03-24 02:53:18.343813 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:53:18.343823 | orchestrator | 2026-03-24 02:53:18.343833 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-24 02:53:18.343843 | orchestrator | Tuesday 24 March 2026 02:53:07 +0000 (0:00:00.723) 0:10:04.843 ********* 2026-03-24 02:53:18.343869 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:53:18.343880 | orchestrator | 2026-03-24 02:53:18.343895 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-24 02:53:18.343912 | orchestrator | Tuesday 24 March 2026 02:53:07 +0000 (0:00:00.515) 0:10:05.358 ********* 2026-03-24 02:53:18.343928 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:53:18.343964 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:53:18.343992 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:53:18.344011 | orchestrator | 2026-03-24 02:53:18.344027 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-24 02:53:18.344042 | orchestrator | Tuesday 24 March 2026 02:53:09 +0000 (0:00:01.557) 0:10:06.915 ********* 2026-03-24 02:53:18.344071 | orchestrator | changed: 
[testbed-node-3] 2026-03-24 02:53:18.344087 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:53:18.344104 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:53:18.344122 | orchestrator | 2026-03-24 02:53:18.344139 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-24 02:53:18.344156 | orchestrator | Tuesday 24 March 2026 02:53:10 +0000 (0:00:01.275) 0:10:08.190 ********* 2026-03-24 02:53:18.344166 | orchestrator | changed: [testbed-node-3] 2026-03-24 02:53:18.344176 | orchestrator | changed: [testbed-node-5] 2026-03-24 02:53:18.344185 | orchestrator | changed: [testbed-node-4] 2026-03-24 02:53:18.344195 | orchestrator | 2026-03-24 02:53:18.344204 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-24 02:53:18.344214 | orchestrator | Tuesday 24 March 2026 02:53:12 +0000 (0:00:01.878) 0:10:10.069 ********* 2026-03-24 02:53:18.344223 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 02:53:18.344233 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 02:53:18.344243 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-24 02:53:18.344253 | orchestrator | 2026-03-24 02:53:18.344263 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 02:53:18.344272 | orchestrator | Tuesday 24 March 2026 02:53:15 +0000 (0:00:02.655) 0:10:12.724 ********* 2026-03-24 02:53:18.344282 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:18.344291 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:18.344301 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:18.344311 | orchestrator 
| 2026-03-24 02:53:18.344320 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-24 02:53:18.344330 | orchestrator | Tuesday 24 March 2026 02:53:15 +0000 (0:00:00.348) 0:10:13.073 ********* 2026-03-24 02:53:18.344345 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:53:18.344359 | orchestrator | 2026-03-24 02:53:18.344375 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-24 02:53:18.344390 | orchestrator | Tuesday 24 March 2026 02:53:16 +0000 (0:00:00.764) 0:10:13.837 ********* 2026-03-24 02:53:18.344406 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:18.344423 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:18.344438 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:18.344454 | orchestrator | 2026-03-24 02:53:18.344470 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-24 02:53:18.344487 | orchestrator | Tuesday 24 March 2026 02:53:16 +0000 (0:00:00.376) 0:10:14.213 ********* 2026-03-24 02:53:18.344498 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:18.344508 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:18.344517 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:18.344527 | orchestrator | 2026-03-24 02:53:18.344537 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-24 02:53:18.344547 | orchestrator | Tuesday 24 March 2026 02:53:16 +0000 (0:00:00.324) 0:10:14.537 ********* 2026-03-24 02:53:18.344557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 02:53:18.344566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 02:53:18.344724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 02:53:18.344748 | orchestrator 
| skipping: [testbed-node-3] 2026-03-24 02:53:18.344758 | orchestrator | 2026-03-24 02:53:18.344768 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-24 02:53:18.344778 | orchestrator | Tuesday 24 March 2026 02:53:17 +0000 (0:00:00.837) 0:10:15.375 ********* 2026-03-24 02:53:18.344788 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:18.344798 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:18.344819 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:18.344828 | orchestrator | 2026-03-24 02:53:18.344838 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:53:18.344848 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-24 02:53:18.344867 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-24 02:53:18.344877 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-24 02:53:18.344887 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-24 02:53:18.344911 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-24 02:53:18.718295 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-24 02:53:18.718387 | orchestrator | 2026-03-24 02:53:18.718402 | orchestrator | 2026-03-24 02:53:18.718411 | orchestrator | 2026-03-24 02:53:18.718420 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:53:18.718430 | orchestrator | Tuesday 24 March 2026 02:53:18 +0000 (0:00:00.507) 0:10:15.882 ********* 2026-03-24 02:53:18.718438 | orchestrator | =============================================================================== 
2026-03-24 02:53:18.718447 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 52.23s 2026-03-24 02:53:18.718454 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.91s 2026-03-24 02:53:18.718459 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.22s 2026-03-24 02:53:18.718464 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.21s 2026-03-24 02:53:18.718469 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.82s 2026-03-24 02:53:18.718473 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.09s 2026-03-24 02:53:18.718478 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.44s 2026-03-24 02:53:18.718483 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.55s 2026-03-24 02:53:18.718487 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.49s 2026-03-24 02:53:18.718492 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.35s 2026-03-24 02:53:18.718496 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.51s 2026-03-24 02:53:18.718501 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.41s 2026-03-24 02:53:18.718505 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.46s 2026-03-24 02:53:18.718510 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.42s 2026-03-24 02:53:18.718514 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.95s 2026-03-24 02:53:18.718519 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.87s 2026-03-24 
02:53:18.718524 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.69s 2026-03-24 02:53:18.718528 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.61s 2026-03-24 02:53:18.718533 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.19s 2026-03-24 02:53:18.718537 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.18s 2026-03-24 02:53:20.974240 | orchestrator | 2026-03-24 02:53:20 | INFO  | Task 846a5e7f-63ff-46c2-94f1-bdb58767b1d9 (ceph-pools) was prepared for execution. 2026-03-24 02:53:20.974352 | orchestrator | 2026-03-24 02:53:20 | INFO  | It takes a moment until task 846a5e7f-63ff-46c2-94f1-bdb58767b1d9 (ceph-pools) has been started and output is visible here. 2026-03-24 02:53:33.690273 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-24 02:53:33.690418 | orchestrator | 2.16.14 2026-03-24 02:53:33.690446 | orchestrator | 2026-03-24 02:53:33.690466 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-24 02:53:33.690489 | orchestrator | 2026-03-24 02:53:33.690653 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 02:53:33.690669 | orchestrator | Tuesday 24 March 2026 02:53:25 +0000 (0:00:00.442) 0:00:00.442 ********* 2026-03-24 02:53:33.690681 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 02:53:33.690693 | orchestrator | 2026-03-24 02:53:33.690705 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 02:53:33.690716 | orchestrator | Tuesday 24 March 2026 02:53:25 +0000 (0:00:00.529) 0:00:00.971 ********* 2026-03-24 02:53:33.690727 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:33.690739 | 
orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:33.690750 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:33.690761 | orchestrator | 2026-03-24 02:53:33.690772 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 02:53:33.690784 | orchestrator | Tuesday 24 March 2026 02:53:26 +0000 (0:00:00.600) 0:00:01.571 ********* 2026-03-24 02:53:33.690796 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:33.690808 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:33.690820 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:33.690833 | orchestrator | 2026-03-24 02:53:33.690845 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 02:53:33.690858 | orchestrator | Tuesday 24 March 2026 02:53:26 +0000 (0:00:00.278) 0:00:01.850 ********* 2026-03-24 02:53:33.690870 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:33.690883 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:33.690895 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:33.690908 | orchestrator | 2026-03-24 02:53:33.690939 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 02:53:33.690952 | orchestrator | Tuesday 24 March 2026 02:53:27 +0000 (0:00:00.729) 0:00:02.580 ********* 2026-03-24 02:53:33.690965 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:33.690978 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:33.690990 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:33.691003 | orchestrator | 2026-03-24 02:53:33.691016 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 02:53:33.691029 | orchestrator | Tuesday 24 March 2026 02:53:27 +0000 (0:00:00.266) 0:00:02.847 ********* 2026-03-24 02:53:33.691042 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:33.691054 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:33.691067 | 
orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:33.691079 | orchestrator | 2026-03-24 02:53:33.691092 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 02:53:33.691105 | orchestrator | Tuesday 24 March 2026 02:53:27 +0000 (0:00:00.247) 0:00:03.095 ********* 2026-03-24 02:53:33.691118 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:33.691131 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:33.691143 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:33.691155 | orchestrator | 2026-03-24 02:53:33.691169 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 02:53:33.691181 | orchestrator | Tuesday 24 March 2026 02:53:27 +0000 (0:00:00.275) 0:00:03.371 ********* 2026-03-24 02:53:33.691193 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:33.691204 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:33.691215 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:33.691226 | orchestrator | 2026-03-24 02:53:33.691237 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 02:53:33.691274 | orchestrator | Tuesday 24 March 2026 02:53:28 +0000 (0:00:00.380) 0:00:03.751 ********* 2026-03-24 02:53:33.691285 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:33.691296 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:33.691307 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:33.691318 | orchestrator | 2026-03-24 02:53:33.691329 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 02:53:33.691340 | orchestrator | Tuesday 24 March 2026 02:53:28 +0000 (0:00:00.254) 0:00:04.005 ********* 2026-03-24 02:53:33.691351 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 02:53:33.691362 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 02:53:33.691373 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 02:53:33.691383 | orchestrator | 2026-03-24 02:53:33.691395 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 02:53:33.691407 | orchestrator | Tuesday 24 March 2026 02:53:29 +0000 (0:00:00.602) 0:00:04.607 ********* 2026-03-24 02:53:33.691423 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:33.691441 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:33.691458 | orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:33.691477 | orchestrator | 2026-03-24 02:53:33.691494 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 02:53:33.691543 | orchestrator | Tuesday 24 March 2026 02:53:29 +0000 (0:00:00.424) 0:00:05.032 ********* 2026-03-24 02:53:33.691562 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 02:53:33.691581 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 02:53:33.691600 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 02:53:33.691617 | orchestrator | 2026-03-24 02:53:33.691635 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 02:53:33.691647 | orchestrator | Tuesday 24 March 2026 02:53:31 +0000 (0:00:02.145) 0:00:07.178 ********* 2026-03-24 02:53:33.691659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-24 02:53:33.691672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-24 02:53:33.691691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-24 02:53:33.691707 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:33.691722 | 
orchestrator | 2026-03-24 02:53:33.691766 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 02:53:33.691786 | orchestrator | Tuesday 24 March 2026 02:53:32 +0000 (0:00:00.580) 0:00:07.758 ********* 2026-03-24 02:53:33.691807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 02:53:33.691830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 02:53:33.691849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 02:53:33.691863 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:33.691874 | orchestrator | 2026-03-24 02:53:33.691885 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 02:53:33.691896 | orchestrator | Tuesday 24 March 2026 02:53:33 +0000 (0:00:00.953) 0:00:08.712 ********* 2026-03-24 02:53:33.691918 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:33.691954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:33.691967 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:33.691978 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:33.691989 | orchestrator | 2026-03-24 02:53:33.692000 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 02:53:33.692010 | orchestrator | Tuesday 24 March 2026 02:53:33 +0000 (0:00:00.156) 0:00:08.869 ********* 2026-03-24 02:53:33.692023 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cefde431640e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 02:53:30.489580', 'end': '2026-03-24 02:53:30.532166', 'delta': '0:00:00.042586', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cefde431640e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 02:53:33.692039 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4f8b0ade79f3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 02:53:31.042066', 'end': '2026-03-24 02:53:31.072628', 'delta': '0:00:00.030562', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f8b0ade79f3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 02:53:33.692060 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cce21668b5d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 02:53:31.607716', 'end': '2026-03-24 02:53:31.658314', 'delta': '0:00:00.050598', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cce21668b5d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 02:53:40.296844 | orchestrator | 2026-03-24 02:53:40.296958 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 02:53:40.296976 | orchestrator | Tuesday 24 March 2026 02:53:33 +0000 (0:00:00.193) 0:00:09.062 ********* 2026-03-24 02:53:40.297012 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:40.297025 | orchestrator | ok: [testbed-node-4] 2026-03-24 02:53:40.297036 | 
orchestrator | ok: [testbed-node-5] 2026-03-24 02:53:40.297047 | orchestrator | 2026-03-24 02:53:40.297058 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 02:53:40.297069 | orchestrator | Tuesday 24 March 2026 02:53:34 +0000 (0:00:00.448) 0:00:09.510 ********* 2026-03-24 02:53:40.297081 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-24 02:53:40.297092 | orchestrator | 2026-03-24 02:53:40.297118 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 02:53:40.297130 | orchestrator | Tuesday 24 March 2026 02:53:35 +0000 (0:00:01.741) 0:00:11.251 ********* 2026-03-24 02:53:40.297141 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:40.297152 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:40.297163 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:40.297174 | orchestrator | 2026-03-24 02:53:40.297185 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 02:53:40.297196 | orchestrator | Tuesday 24 March 2026 02:53:36 +0000 (0:00:00.297) 0:00:11.549 ********* 2026-03-24 02:53:40.297206 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:40.297217 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:40.297228 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:40.297239 | orchestrator | 2026-03-24 02:53:40.297250 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 02:53:40.297260 | orchestrator | Tuesday 24 March 2026 02:53:36 +0000 (0:00:00.772) 0:00:12.322 ********* 2026-03-24 02:53:40.297271 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:40.297282 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:40.297293 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:40.297304 | orchestrator | 2026-03-24 02:53:40.297315 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 02:53:40.297326 | orchestrator | Tuesday 24 March 2026 02:53:37 +0000 (0:00:00.273) 0:00:12.595 ********* 2026-03-24 02:53:40.297337 | orchestrator | ok: [testbed-node-3] 2026-03-24 02:53:40.297348 | orchestrator | 2026-03-24 02:53:40.297359 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 02:53:40.297369 | orchestrator | Tuesday 24 March 2026 02:53:37 +0000 (0:00:00.110) 0:00:12.706 ********* 2026-03-24 02:53:40.297380 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:40.297391 | orchestrator | 2026-03-24 02:53:40.297402 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 02:53:40.297413 | orchestrator | Tuesday 24 March 2026 02:53:37 +0000 (0:00:00.240) 0:00:12.946 ********* 2026-03-24 02:53:40.297424 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:40.297435 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:40.297445 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:40.297456 | orchestrator | 2026-03-24 02:53:40.297467 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 02:53:40.297553 | orchestrator | Tuesday 24 March 2026 02:53:37 +0000 (0:00:00.273) 0:00:13.220 ********* 2026-03-24 02:53:40.297565 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:40.297576 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:40.297586 | orchestrator | skipping: [testbed-node-5] 2026-03-24 02:53:40.297597 | orchestrator | 2026-03-24 02:53:40.297608 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 02:53:40.297619 | orchestrator | Tuesday 24 March 2026 02:53:38 +0000 (0:00:00.318) 0:00:13.539 ********* 2026-03-24 02:53:40.297630 | orchestrator | skipping: [testbed-node-3] 
2026-03-24 02:53:40.297640 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:40.297651 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:40.297661 | orchestrator |
2026-03-24 02:53:40.297672 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-24 02:53:40.297683 | orchestrator | Tuesday 24 March 2026 02:53:38 +0000 (0:00:00.495) 0:00:14.034 *********
2026-03-24 02:53:40.297703 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:40.297714 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:40.297734 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:40.297752 | orchestrator |
2026-03-24 02:53:40.297770 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-24 02:53:40.297788 | orchestrator | Tuesday 24 March 2026 02:53:38 +0000 (0:00:00.326) 0:00:14.361 *********
2026-03-24 02:53:40.297806 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:40.297825 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:40.297843 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:40.297860 | orchestrator |
2026-03-24 02:53:40.297880 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-24 02:53:40.297899 | orchestrator | Tuesday 24 March 2026 02:53:39 +0000 (0:00:00.303) 0:00:14.664 *********
2026-03-24 02:53:40.297917 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:40.297932 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:40.297943 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:40.297954 | orchestrator |
2026-03-24 02:53:40.297965 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-24 02:53:40.297976 | orchestrator | Tuesday 24 March 2026 02:53:39 +0000 (0:00:00.489) 0:00:15.154 *********
2026-03-24 02:53:40.297987 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:40.297998 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:40.298009 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:40.298083 | orchestrator |
2026-03-24 02:53:40.298097 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-24 02:53:40.298108 | orchestrator | Tuesday 24 March 2026 02:53:40 +0000 (0:00:00.315) 0:00:15.469 *********
2026-03-24 02:53:40.298146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.298171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.298185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.298199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.298221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.298233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.298244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.298255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.298266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.298287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.342422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.342569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.342587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.342614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.342633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.342644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.342662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.342675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.342695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.342712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.342741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.342767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.454993 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:40.455112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.455131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.455144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.455180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.455214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.455234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.455254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.455267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.455279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.455291 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:40.455302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.455314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.455332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.741456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.741635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.741692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.741704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.741714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.741724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-24 02:53:40.741780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.741822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.741842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.741860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.741876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-24 02:53:40.741895 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:40.741914 | orchestrator |
2026-03-24 02:53:40.741932 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-24 02:53:40.741952 | orchestrator | Tuesday 24 March 2026 02:53:40 +0000 (0:00:00.545) 0:00:16.014 *********
2026-03-24 02:53:40.741987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847812 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847866 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847957 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:40.847980 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition':
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.848045 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.972788 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.972897 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.972915 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.972971 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.972986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.973020 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.973036 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.973050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.973063 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.973086 | orchestrator | skipping: [testbed-node-3] 2026-03-24 02:53:40.973106 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.973120 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:40.973142 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.085347 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.085576 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.085620 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.085635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.085650 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.085681 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.085699 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.085713 | orchestrator | skipping: [testbed-node-4] 2026-03-24 02:53:41.085727 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.085746 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.217955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.218123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.218168 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.218197 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.218209 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-24 02:53:41.218221 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:41.218261 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:41.218291 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:41.218306 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:41.218328 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:53.309980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-24-01-35-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-24 02:53:53.310208 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:53.310230 | orchestrator |
2026-03-24 02:53:53.310243 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-24 02:53:53.310254 | orchestrator | Tuesday 24 March 2026 02:53:41 +0000 (0:00:00.577) 0:00:16.592 *********
2026-03-24 02:53:53.310264 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:53:53.310275 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:53:53.310285 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:53:53.310294 | orchestrator |
2026-03-24 02:53:53.310304 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-24 02:53:53.310314 | orchestrator | Tuesday 24 March 2026 02:53:42 +0000 (0:00:00.829) 0:00:17.422 *********
2026-03-24 02:53:53.310323 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:53:53.310333 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:53:53.310342 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:53:53.310352 | orchestrator |
2026-03-24 02:53:53.310361 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 02:53:53.310371 | orchestrator | Tuesday 24 March 2026 02:53:42 +0000 (0:00:00.294) 0:00:17.717 *********
2026-03-24 02:53:53.310380 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:53:53.310390 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:53:53.310400 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:53:53.310438 | orchestrator |
2026-03-24 02:53:53.310469 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 02:53:53.310480 | orchestrator | Tuesday 24 March 2026 02:53:43 +0000 (0:00:01.473) 0:00:19.191 *********
2026-03-24 02:53:53.310492 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.310504 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:53.310515 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:53.310526 | orchestrator |
2026-03-24 02:53:53.310537 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 02:53:53.310548 | orchestrator | Tuesday 24 March 2026 02:53:44 +0000 (0:00:00.299) 0:00:19.490 *********
2026-03-24 02:53:53.310558 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.310567 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:53.310577 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:53.310586 | orchestrator |
2026-03-24 02:53:53.310596 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 02:53:53.310605 | orchestrator | Tuesday 24 March 2026 02:53:44 +0000 (0:00:00.654) 0:00:20.145 *********
2026-03-24 02:53:53.310615 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.310624 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:53.310634 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:53.310643 | orchestrator |
2026-03-24 02:53:53.310653 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-24 02:53:53.310662 | orchestrator | Tuesday 24 March 2026 02:53:45 +0000 (0:00:00.311) 0:00:20.456 *********
2026-03-24 02:53:53.310672 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-24 02:53:53.310682 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-24 02:53:53.310691 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-24 02:53:53.310701 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-24 02:53:53.310711 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-24 02:53:53.310720 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-24 02:53:53.310730 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-24 02:53:53.310747 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-24 02:53:53.310757 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-24 02:53:53.310767 | orchestrator |
2026-03-24 02:53:53.310777 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-24 02:53:53.310787 | orchestrator | Tuesday 24 March 2026 02:53:46 +0000 (0:00:01.106) 0:00:21.563 *********
2026-03-24 02:53:53.310796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-24 02:53:53.310807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-24 02:53:53.310816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-24 02:53:53.310826 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.310835 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-24 02:53:53.310845 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-24 02:53:53.310854 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-24 02:53:53.310864 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:53.310874 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-24 02:53:53.310883 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-24 02:53:53.310893 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-24 02:53:53.310902 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:53.310912 | orchestrator |
2026-03-24 02:53:53.310922 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-24 02:53:53.310932 | orchestrator | Tuesday 24 March 2026 02:53:46 +0000 (0:00:00.403) 0:00:21.966 *********
2026-03-24 02:53:53.310961 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 02:53:53.310972 | orchestrator |
2026-03-24 02:53:53.310984 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 02:53:53.311003 | orchestrator | Tuesday 24 March 2026 02:53:47 +0000 (0:00:00.692) 0:00:22.659 *********
2026-03-24 02:53:53.311019 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.311035 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:53.311050 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:53.311066 | orchestrator |
2026-03-24 02:53:53.311082 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 02:53:53.311099 | orchestrator | Tuesday 24 March 2026 02:53:47 +0000 (0:00:00.305) 0:00:22.964 *********
2026-03-24 02:53:53.311116 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.311132 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:53.311148 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:53.311165 | orchestrator |
2026-03-24 02:53:53.311212 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 02:53:53.311223 | orchestrator | Tuesday 24 March 2026 02:53:47 +0000 (0:00:00.282) 0:00:23.247 *********
2026-03-24 02:53:53.311233 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.311243 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:53:53.311253 | orchestrator | skipping: [testbed-node-5]
2026-03-24 02:53:53.311262 | orchestrator |
2026-03-24 02:53:53.311272 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 02:53:53.311282 | orchestrator | Tuesday 24 March 2026 02:53:48 +0000 (0:00:00.489) 0:00:23.736 *********
2026-03-24 02:53:53.311291 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:53:53.311301 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:53:53.311311 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:53:53.311320 | orchestrator |
2026-03-24 02:53:53.311330 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 02:53:53.311339 | orchestrator | Tuesday 24 March 2026 02:53:48 +0000 (0:00:00.390) 0:00:24.126 *********
2026-03-24 02:53:53.311349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:53:53.311368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 02:53:53.311385 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 02:53:53.311395 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.311404 | orchestrator |
2026-03-24 02:53:53.311439 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 02:53:53.311456 | orchestrator | Tuesday 24 March 2026 02:53:49 +0000 (0:00:00.381) 0:00:24.508 *********
2026-03-24 02:53:53.311473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:53:53.311489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 02:53:53.311506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 02:53:53.311516 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.311526 | orchestrator |
2026-03-24 02:53:53.311536 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 02:53:53.311545 | orchestrator | Tuesday 24 March 2026 02:53:49 +0000 (0:00:00.373) 0:00:24.882 *********
2026-03-24 02:53:53.311555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:53:53.311564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 02:53:53.311574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 02:53:53.311583 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:53:53.311593 | orchestrator |
2026-03-24 02:53:53.311603 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 02:53:53.311612 | orchestrator | Tuesday 24 March 2026 02:53:49 +0000 (0:00:00.349) 0:00:25.231 *********
2026-03-24 02:53:53.311622 | orchestrator | ok: [testbed-node-3]
2026-03-24 02:53:53.311631 | orchestrator | ok: [testbed-node-4]
2026-03-24 02:53:53.311641 | orchestrator | ok: [testbed-node-5]
2026-03-24 02:53:53.311651 | orchestrator |
2026-03-24 02:53:53.311660 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 02:53:53.311670 | orchestrator | Tuesday 24 March 2026 02:53:50 +0000 (0:00:00.327) 0:00:25.559 *********
2026-03-24 02:53:53.311680 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-24 02:53:53.311689 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-24 02:53:53.311699 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-24 02:53:53.311708 | orchestrator |
2026-03-24 02:53:53.311718 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-24 02:53:53.311727 | orchestrator | Tuesday 24 March 2026 02:53:50 +0000 (0:00:00.735) 0:00:26.294 *********
2026-03-24 02:53:53.311737 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 02:53:53.311747 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 02:53:53.311756 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 02:53:53.311772 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:53:53.311799 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 02:53:53.311816 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 02:53:53.311832 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 02:53:53.311847 | orchestrator |
2026-03-24 02:53:53.311862 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-24 02:53:53.311878 | orchestrator | Tuesday 24 March 2026 02:53:51 +0000 (0:00:00.808) 0:00:27.103 *********
2026-03-24 02:53:53.311894 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 02:53:53.311922 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 02:55:33.006277 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 02:55:33.006378 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 02:55:33.006408 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 02:55:33.006414 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 02:55:33.006419 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 02:55:33.006425 | orchestrator |
2026-03-24 02:55:33.006431 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-24 02:55:33.006436 | orchestrator | Tuesday 24 March 2026 02:53:53 +0000 (0:00:01.579) 0:00:28.682 *********
2026-03-24 02:55:33.006440 | orchestrator | skipping: [testbed-node-3]
2026-03-24 02:55:33.006445 | orchestrator | skipping: [testbed-node-4]
2026-03-24 02:55:33.006449 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-24 02:55:33.006453 | orchestrator |
2026-03-24 02:55:33.006458 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-24 02:55:33.006463 | orchestrator | Tuesday 24 March 2026 02:53:53 +0000 (0:00:00.380) 0:00:29.062 *********
2026-03-24 02:55:33.006472 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-24 02:55:33.006481 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-24 02:55:33.006519 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-24 02:55:33.006527 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-24 02:55:33.006533 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-24 02:55:33.006539 | orchestrator |
2026-03-24 02:55:33.006545 | orchestrator | TASK [generate keys] ***********************************************************
2026-03-24 02:55:33.006550 | orchestrator | Tuesday 24 March 2026 02:54:40 +0000 (0:00:46.443) 0:01:15.506 *********
2026-03-24 02:55:33.006556 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006562 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006568 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006574 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006580 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006586 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006592 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-24 02:55:33.006597 | orchestrator |
2026-03-24 02:55:33.006603 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-24 02:55:33.006609 | orchestrator | Tuesday 24 March 2026 02:55:03 +0000 (0:00:23.763) 0:01:39.269 *********
2026-03-24 02:55:33.006615 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006665 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006672 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006678 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006684 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006689 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006696 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-24 02:55:33.006702 | orchestrator |
2026-03-24 02:55:33.006708 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-03-24 02:55:33.006714 | orchestrator | Tuesday 24 March 2026 02:55:15 +0000 (0:00:11.627) 0:01:50.896 *********
2026-03-24 02:55:33.006719 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006743 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-24 02:55:33.006749 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-24 02:55:33.006755 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006761 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-24 02:55:33.006767 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-24 02:55:33.006773 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006779 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-24 02:55:33.006785 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-24 02:55:33.006790 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006797 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-24 02:55:33.006803 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-24 02:55:33.006809 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006814 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-24 02:55:33.006819 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-24 02:55:33.006825 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 02:55:33.006831 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-24 02:55:33.006837 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-24 02:55:33.006843 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-24 02:55:33.006848 | orchestrator |
2026-03-24 02:55:33.006854 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 02:55:33.006868 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-24 02:55:33.006876 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-24 02:55:33.006883 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-24 02:55:33.006888 | orchestrator |
2026-03-24 02:55:33.006895 | orchestrator |
2026-03-24 02:55:33.006901 | orchestrator |
2026-03-24 02:55:33.006907 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 02:55:33.006913 | orchestrator | Tuesday 24 March 2026 02:55:32 +0000 (0:00:17.158) 0:02:08.055 *********
2026-03-24 02:55:33.006919 | orchestrator | ===============================================================================
2026-03-24 02:55:33.006932 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.44s
2026-03-24 02:55:33.006939 | orchestrator | generate keys ---------------------------------------------------------- 23.76s
2026-03-24 02:55:33.006944 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.16s
2026-03-24 02:55:33.006951 | orchestrator | get keys from monitors ------------------------------------------------- 11.63s
2026-03-24 02:55:33.006957 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.15s
2026-03-24 02:55:33.006964 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.74s
2026-03-24 02:55:33.006970 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.58s
2026-03-24 02:55:33.006976 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.47s
2026-03-24 02:55:33.006982 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.11s
2026-03-24 02:55:33.006988 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.95s
2026-03-24 02:55:33.006994 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.83s
2026-03-24 02:55:33.007059 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.81s
2026-03-24 02:55:33.007066 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.77s
2026-03-24 02:55:33.007072 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.74s
2026-03-24 02:55:33.007079 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.73s
2026-03-24 02:55:33.007086 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.69s
2026-03-24 02:55:33.007092 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s
2026-03-24 02:55:33.007098 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s
2026-03-24 02:55:33.007105 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.60s
2026-03-24 02:55:33.007112 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.58s
2026-03-24 02:55:35.255299 | orchestrator | 2026-03-24 02:55:35 | INFO  | Task 167f1be9-8c94-44ff-bfec-4ddca68bdbbf (copy-ceph-keys) was prepared for execution.
2026-03-24 02:55:35.255436 | orchestrator | 2026-03-24 02:55:35 | INFO  | It takes a moment until task 167f1be9-8c94-44ff-bfec-4ddca68bdbbf (copy-ceph-keys) has been started and output is visible here.
2026-03-24 02:56:11.884759 | orchestrator |
2026-03-24 02:56:11.885002 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-24 02:56:11.885036 | orchestrator |
2026-03-24 02:56:11.885057 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-24 02:56:11.885076 | orchestrator | Tuesday 24 March 2026 02:55:39 +0000 (0:00:00.155) 0:00:00.155 *********
2026-03-24 02:56:11.885093 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-24 02:56:11.885111 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-24 02:56:11.885127 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-24 02:56:11.885144 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-24 02:56:11.885160 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-24 02:56:11.885175 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-24 02:56:11.885190 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-24 02:56:11.885206 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-24 02:56:11.885256 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-24 02:56:11.885274 | orchestrator |
2026-03-24 02:56:11.885292 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-24 02:56:11.885309 | orchestrator | Tuesday 24 March 2026 02:55:44 +0000 (0:00:04.615) 0:00:04.770 *********
2026-03-24 02:56:11.885327 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-24 02:56:11.885363 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-24 02:56:11.885382 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-24 02:56:11.885399 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-24 02:56:11.885417 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-24 02:56:11.885434 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-24 02:56:11.885453 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-24 02:56:11.885466 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-24 02:56:11.885477 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-24 02:56:11.885489 | orchestrator |
2026-03-24 02:56:11.885500 | orchestrator | TASK [Create share directory] **************************************************
2026-03-24 02:56:11.885511 | orchestrator | Tuesday 24 March 2026 02:55:48 +0000 (0:00:04.234) 0:00:09.005 *********
2026-03-24 02:56:11.885523 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-24 02:56:11.885534 | orchestrator |
2026-03-24 02:56:11.885545 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-24 02:56:11.885556 | orchestrator | Tuesday 24 March 2026 02:55:49 +0000 (0:00:00.997) 0:00:10.003 *********
2026-03-24 02:56:11.885568 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-24 02:56:11.885579 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-24 02:56:11.885591 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-24 02:56:11.885602 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-24 02:56:11.885614 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-24 02:56:11.885630 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-24 02:56:11.885646 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-24 02:56:11.885661 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-24 02:56:11.885677 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-24 02:56:11.885694 | orchestrator |
2026-03-24 02:56:11.885710 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-24 02:56:11.885727 | orchestrator | Tuesday 24 March 2026 02:56:02 +0000 (0:00:12.766) 0:00:22.769 *********
2026-03-24 02:56:11.885743 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-24 02:56:11.885760 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-24 02:56:11.885778 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-24 02:56:11.885794 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-24 02:56:11.885833 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-24 02:56:11.885879 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-24 02:56:11.885895 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-24 02:56:11.885905 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-24 02:56:11.885915 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-24 02:56:11.885924 | orchestrator | 2026-03-24 02:56:11.885934 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-03-24 02:56:11.885944 | orchestrator | Tuesday 24 March 2026 02:56:04 +0000 (0:00:02.897) 0:00:25.666 ********* 2026-03-24 02:56:11.885954 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-24 02:56:11.885964 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-24 02:56:11.885973 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-24 02:56:11.885983 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-24 02:56:11.885992 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-24 02:56:11.886002 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-24 02:56:11.886011 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-03-24 02:56:11.886079 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-24 02:56:11.886093 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-24 02:56:11.886111 | orchestrator | 2026-03-24 02:56:11.886169 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:56:11.886200 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:56:11.886217 | orchestrator | 2026-03-24 02:56:11.886227 | orchestrator | 2026-03-24 02:56:11.886237 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:56:11.886246 | orchestrator | Tuesday 24 March 2026 02:56:11 +0000 (0:00:06.701) 0:00:32.368 ********* 2026-03-24 02:56:11.886256 | orchestrator | =============================================================================== 2026-03-24 02:56:11.886265 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.77s 2026-03-24 02:56:11.886275 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.70s 2026-03-24 02:56:11.886285 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.62s 2026-03-24 02:56:11.886294 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.23s 2026-03-24 02:56:11.886303 | orchestrator | Check if target directories exist --------------------------------------- 2.90s 2026-03-24 02:56:11.886313 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2026-03-24 02:56:24.178400 | orchestrator | 2026-03-24 02:56:24 | INFO  | Task 019ef8f6-17e4-4d83-91c3-08591f7859a8 (cephclient) was prepared for execution. 
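The play above fetches the Ceph keyrings and writes them into `/opt/configuration/environments/infrastructure/files/ceph`. A minimal spot-check of that result can be sketched as a shell function; the directory path and client names are taken from the task output above, while the function name and the exact checks are illustrative assumptions, not part of the testbed scripts:

```shell
# Hypothetical sanity check for the keyrings the play copied into the
# configuration directory. Pass the directory as the first argument, e.g.
#   check_keyrings /opt/configuration/environments/infrastructure/files/ceph
check_keyrings() {
    local dir="$1" client f
    for client in admin cinder cinder-backup nova glance gnocchi manila; do
        f="${dir}/ceph.client.${client}.keyring"
        # Each keyring must exist, be non-empty, start its section with
        # [client.NAME], and carry a "key = ..." entry.
        [ -s "$f" ] || { echo "missing or empty: $f"; return 1; }
        grep -q "^\[client\.${client}\]" "$f" || { echo "bad header: $f"; return 1; }
        grep -q "key = " "$f" || { echo "no key in: $f"; return 1; }
    done
    echo "all keyrings ok"
}
```

On a healthy run such a check would pass for all seven keyrings that the "Write ceph keys" tasks report as changed.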
2026-03-24 02:56:24.178511 | orchestrator | 2026-03-24 02:56:24 | INFO  | It takes a moment until task 019ef8f6-17e4-4d83-91c3-08591f7859a8 (cephclient) has been started and output is visible here. 2026-03-24 02:57:23.806895 | orchestrator | 2026-03-24 02:57:23.807045 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-24 02:57:23.807062 | orchestrator | 2026-03-24 02:57:23.807073 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-24 02:57:23.807094 | orchestrator | Tuesday 24 March 2026 02:56:28 +0000 (0:00:00.229) 0:00:00.229 ********* 2026-03-24 02:57:23.807104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-24 02:57:23.807136 | orchestrator | 2026-03-24 02:57:23.807145 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-24 02:57:23.807154 | orchestrator | Tuesday 24 March 2026 02:56:28 +0000 (0:00:00.233) 0:00:00.463 ********* 2026-03-24 02:57:23.807163 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-24 02:57:23.807172 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-24 02:57:23.807182 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-24 02:57:23.807191 | orchestrator | 2026-03-24 02:57:23.807200 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-24 02:57:23.807208 | orchestrator | Tuesday 24 March 2026 02:56:29 +0000 (0:00:01.167) 0:00:01.631 ********* 2026-03-24 02:57:23.807218 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-24 02:57:23.807227 | orchestrator | 2026-03-24 02:57:23.807236 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-03-24 02:57:23.807244 | orchestrator | Tuesday 24 March 2026 02:56:31 +0000 (0:00:01.371) 0:00:03.002 ********* 2026-03-24 02:57:23.807253 | orchestrator | changed: [testbed-manager] 2026-03-24 02:57:23.807262 | orchestrator | 2026-03-24 02:57:23.807270 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-24 02:57:23.807279 | orchestrator | Tuesday 24 March 2026 02:56:32 +0000 (0:00:00.882) 0:00:03.885 ********* 2026-03-24 02:57:23.807288 | orchestrator | changed: [testbed-manager] 2026-03-24 02:57:23.807296 | orchestrator | 2026-03-24 02:57:23.807305 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-24 02:57:23.807314 | orchestrator | Tuesday 24 March 2026 02:56:32 +0000 (0:00:00.875) 0:00:04.760 ********* 2026-03-24 02:57:23.807322 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-24 02:57:23.807331 | orchestrator | ok: [testbed-manager] 2026-03-24 02:57:23.807340 | orchestrator | 2026-03-24 02:57:23.807348 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-24 02:57:23.807357 | orchestrator | Tuesday 24 March 2026 02:57:14 +0000 (0:00:41.235) 0:00:45.996 ********* 2026-03-24 02:57:23.807366 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-24 02:57:23.807375 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-24 02:57:23.807384 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-24 02:57:23.807392 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-24 02:57:23.807401 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-24 02:57:23.807410 | orchestrator | 2026-03-24 02:57:23.807420 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-24 02:57:23.807431 | 
orchestrator | Tuesday 24 March 2026 02:57:18 +0000 (0:00:03.998) 0:00:49.994 ********* 2026-03-24 02:57:23.807441 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-24 02:57:23.807451 | orchestrator | 2026-03-24 02:57:23.807461 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-24 02:57:23.807471 | orchestrator | Tuesday 24 March 2026 02:57:18 +0000 (0:00:00.475) 0:00:50.470 ********* 2026-03-24 02:57:23.807481 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:57:23.807491 | orchestrator | 2026-03-24 02:57:23.807501 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-24 02:57:23.807511 | orchestrator | Tuesday 24 March 2026 02:57:18 +0000 (0:00:00.135) 0:00:50.606 ********* 2026-03-24 02:57:23.807520 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:57:23.807530 | orchestrator | 2026-03-24 02:57:23.807540 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-24 02:57:23.807550 | orchestrator | Tuesday 24 March 2026 02:57:19 +0000 (0:00:00.583) 0:00:51.190 ********* 2026-03-24 02:57:23.807574 | orchestrator | changed: [testbed-manager] 2026-03-24 02:57:23.807585 | orchestrator | 2026-03-24 02:57:23.807595 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-24 02:57:23.807619 | orchestrator | Tuesday 24 March 2026 02:57:20 +0000 (0:00:01.425) 0:00:52.615 ********* 2026-03-24 02:57:23.807693 | orchestrator | changed: [testbed-manager] 2026-03-24 02:57:23.807711 | orchestrator | 2026-03-24 02:57:23.807727 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-24 02:57:23.807739 | orchestrator | Tuesday 24 March 2026 02:57:21 +0000 (0:00:00.699) 0:00:53.315 ********* 2026-03-24 02:57:23.807748 | orchestrator | changed: [testbed-manager] 2026-03-24 02:57:23.807758 | 
orchestrator | 2026-03-24 02:57:23.807768 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-24 02:57:23.807778 | orchestrator | Tuesday 24 March 2026 02:57:22 +0000 (0:00:00.586) 0:00:53.901 ********* 2026-03-24 02:57:23.807786 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-24 02:57:23.807796 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-24 02:57:23.807811 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-24 02:57:23.807826 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-24 02:57:23.807840 | orchestrator | 2026-03-24 02:57:23.807855 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:57:23.807870 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 02:57:23.807885 | orchestrator | 2026-03-24 02:57:23.807899 | orchestrator | 2026-03-24 02:57:23.807935 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:57:23.807950 | orchestrator | Tuesday 24 March 2026 02:57:23 +0000 (0:00:01.414) 0:00:55.316 ********* 2026-03-24 02:57:23.807965 | orchestrator | =============================================================================== 2026-03-24 02:57:23.807981 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.24s 2026-03-24 02:57:23.807995 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.00s 2026-03-24 02:57:23.808011 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.43s 2026-03-24 02:57:23.808026 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.42s 2026-03-24 02:57:23.808041 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.37s 2026-03-24 02:57:23.808055 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.17s 2026-03-24 02:57:23.808069 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.88s 2026-03-24 02:57:23.808085 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2026-03-24 02:57:23.808099 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.70s 2026-03-24 02:57:23.808115 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.59s 2026-03-24 02:57:23.808129 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.58s 2026-03-24 02:57:23.808144 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-03-24 02:57:23.808160 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2026-03-24 02:57:23.808169 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2026-03-24 02:57:26.041991 | orchestrator | 2026-03-24 02:57:26 | INFO  | Task b41b0487-929a-4234-b8f9-2b7ce28c6b9e (ceph-bootstrap-dashboard) was prepared for execution. 2026-03-24 02:57:26.042213 | orchestrator | 2026-03-24 02:57:26 | INFO  | It takes a moment until task b41b0487-929a-4234-b8f9-2b7ce28c6b9e (ceph-bootstrap-dashboard) has been started and output is visible here. 
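The cephclient role above runs the Ceph CLI inside a container (managed via the copied `docker-compose.yml` under `/opt/cephclient`) and installs wrapper scripts for `ceph`, `ceph-authtool`, `rados`, `radosgw-admin` and `rbd`. A sketch of how such a wrapper could be generated is shown below; this is an assumption about the general shape, not the role's actual template, and the compose project directory and service name `cephclient` are guesses based on the paths in the log:

```shell
# Emit a hypothetical wrapper script for one containerised Ceph tool,
# in the spirit of the "Copy wrapper scripts" task above.
make_wrapper() {
    # $1: tool name, e.g. "ceph", "rbd" or "radosgw-admin"
    cat <<EOF
#!/usr/bin/env bash
# Run "$1" inside the cephclient container instead of on the host.
exec docker compose --project-directory /opt/cephclient exec cephclient $1 "\$@"
EOF
}
```

Installing one wrapper would then amount to something like `make_wrapper rbd > /usr/local/bin/rbd && chmod +x /usr/local/bin/rbd`, so that host-side `rbd` invocations transparently execute in the container.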
2026-03-24 02:59:00.472916 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-24 02:59:00.473005 | orchestrator | 2.16.14 2026-03-24 02:59:00.473013 | orchestrator | 2026-03-24 02:59:00.473019 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-24 02:59:00.473024 | orchestrator | 2026-03-24 02:59:00.473029 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-24 02:59:00.473050 | orchestrator | Tuesday 24 March 2026 02:57:29 +0000 (0:00:00.197) 0:00:00.197 ********* 2026-03-24 02:59:00.473055 | orchestrator | changed: [testbed-manager] 2026-03-24 02:59:00.473061 | orchestrator | 2026-03-24 02:59:00.473066 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-24 02:59:00.473070 | orchestrator | Tuesday 24 March 2026 02:57:31 +0000 (0:00:01.322) 0:00:01.520 ********* 2026-03-24 02:59:00.473075 | orchestrator | changed: [testbed-manager] 2026-03-24 02:59:00.473080 | orchestrator | 2026-03-24 02:59:00.473084 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-24 02:59:00.473089 | orchestrator | Tuesday 24 March 2026 02:57:32 +0000 (0:00:00.972) 0:00:02.492 ********* 2026-03-24 02:59:00.473094 | orchestrator | changed: [testbed-manager] 2026-03-24 02:59:00.473098 | orchestrator | 2026-03-24 02:59:00.473103 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-24 02:59:00.473107 | orchestrator | Tuesday 24 March 2026 02:57:33 +0000 (0:00:00.957) 0:00:03.450 ********* 2026-03-24 02:59:00.473112 | orchestrator | changed: [testbed-manager] 2026-03-24 02:59:00.473116 | orchestrator | 2026-03-24 02:59:00.473121 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-24 02:59:00.473125 | orchestrator | Tuesday 24 March 
2026 02:57:34 +0000 (0:00:01.048) 0:00:04.498 ********* 2026-03-24 02:59:00.473130 | orchestrator | changed: [testbed-manager] 2026-03-24 02:59:00.473134 | orchestrator | 2026-03-24 02:59:00.473139 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-24 02:59:00.473143 | orchestrator | Tuesday 24 March 2026 02:57:35 +0000 (0:00:01.054) 0:00:05.552 ********* 2026-03-24 02:59:00.473157 | orchestrator | changed: [testbed-manager] 2026-03-24 02:59:00.473162 | orchestrator | 2026-03-24 02:59:00.473167 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-24 02:59:00.473171 | orchestrator | Tuesday 24 March 2026 02:57:36 +0000 (0:00:00.984) 0:00:06.537 ********* 2026-03-24 02:59:00.473176 | orchestrator | changed: [testbed-manager] 2026-03-24 02:59:00.473180 | orchestrator | 2026-03-24 02:59:00.473185 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-24 02:59:00.473189 | orchestrator | Tuesday 24 March 2026 02:57:37 +0000 (0:00:01.078) 0:00:07.615 ********* 2026-03-24 02:59:00.473193 | orchestrator | changed: [testbed-manager] 2026-03-24 02:59:00.473198 | orchestrator | 2026-03-24 02:59:00.473202 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-24 02:59:00.473207 | orchestrator | Tuesday 24 March 2026 02:57:38 +0000 (0:00:01.222) 0:00:08.838 ********* 2026-03-24 02:59:00.473211 | orchestrator | changed: [testbed-manager] 2026-03-24 02:59:00.473216 | orchestrator | 2026-03-24 02:59:00.473220 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-24 02:59:00.473225 | orchestrator | Tuesday 24 March 2026 02:58:35 +0000 (0:00:57.190) 0:01:06.029 ********* 2026-03-24 02:59:00.473229 | orchestrator | skipping: [testbed-manager] 2026-03-24 02:59:00.473234 | orchestrator | 2026-03-24 02:59:00.473238 | orchestrator 
| PLAY [Restart ceph manager services] ******************************************* 2026-03-24 02:59:00.473243 | orchestrator | 2026-03-24 02:59:00.473247 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-24 02:59:00.473252 | orchestrator | Tuesday 24 March 2026 02:58:35 +0000 (0:00:00.185) 0:01:06.215 ********* 2026-03-24 02:59:00.473259 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:59:00.473266 | orchestrator | 2026-03-24 02:59:00.473274 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-24 02:59:00.473282 | orchestrator | 2026-03-24 02:59:00.473290 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-24 02:59:00.473298 | orchestrator | Tuesday 24 March 2026 02:58:37 +0000 (0:00:01.727) 0:01:07.942 ********* 2026-03-24 02:59:00.473305 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:59:00.473313 | orchestrator | 2026-03-24 02:59:00.473320 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-24 02:59:00.473335 | orchestrator | 2026-03-24 02:59:00.473343 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-24 02:59:00.473349 | orchestrator | Tuesday 24 March 2026 02:58:48 +0000 (0:00:11.248) 0:01:19.191 ********* 2026-03-24 02:59:00.473354 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:59:00.473358 | orchestrator | 2026-03-24 02:59:00.473363 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 02:59:00.473368 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 02:59:00.473375 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:59:00.473407 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:59:00.473412 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 02:59:00.473417 | orchestrator | 2026-03-24 02:59:00.473421 | orchestrator | 2026-03-24 02:59:00.473426 | orchestrator | 2026-03-24 02:59:00.473430 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 02:59:00.473435 | orchestrator | Tuesday 24 March 2026 02:59:00 +0000 (0:00:11.242) 0:01:30.433 ********* 2026-03-24 02:59:00.473439 | orchestrator | =============================================================================== 2026-03-24 02:59:00.473444 | orchestrator | Create admin user ------------------------------------------------------ 57.19s 2026-03-24 02:59:00.473460 | orchestrator | Restart ceph manager service ------------------------------------------- 24.22s 2026-03-24 02:59:00.473465 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.32s 2026-03-24 02:59:00.473470 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.22s 2026-03-24 02:59:00.473474 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.08s 2026-03-24 02:59:00.473479 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.05s 2026-03-24 02:59:00.473483 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.05s 2026-03-24 02:59:00.473488 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.98s 2026-03-24 02:59:00.473492 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.97s 2026-03-24 02:59:00.473497 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.96s 2026-03-24 02:59:00.473501 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.19s 2026-03-24 02:59:00.762480 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-03-24 02:59:02.709316 | orchestrator | 2026-03-24 02:59:02 | INFO  | Task 0875eb70-30b6-4ba5-883f-092466bee897 (keystone) was prepared for execution. 2026-03-24 02:59:02.710438 | orchestrator | 2026-03-24 02:59:02 | INFO  | It takes a moment until task 0875eb70-30b6-4ba5-883f-092466bee897 (keystone) has been started and output is visible here. 2026-03-24 02:59:09.941836 | orchestrator | 2026-03-24 02:59:09.941916 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 02:59:09.941923 | orchestrator | 2026-03-24 02:59:09.941928 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 02:59:09.941945 | orchestrator | Tuesday 24 March 2026 02:59:06 +0000 (0:00:00.252) 0:00:00.252 ********* 2026-03-24 02:59:09.941950 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:59:09.941957 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:59:09.941961 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:59:09.941966 | orchestrator | 2026-03-24 02:59:09.941970 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 02:59:09.941975 | orchestrator | Tuesday 24 March 2026 02:59:07 +0000 (0:00:00.308) 0:00:00.561 ********* 2026-03-24 02:59:09.941993 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-24 02:59:09.941998 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-24 02:59:09.942002 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-24 02:59:09.942007 | orchestrator | 2026-03-24 02:59:09.942011 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-24 02:59:09.942037 | orchestrator | 2026-03-24 02:59:09.942041 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-03-24 02:59:09.942046 | orchestrator | Tuesday 24 March 2026 02:59:07 +0000 (0:00:00.431) 0:00:00.992 ********* 2026-03-24 02:59:09.942051 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:59:09.942056 | orchestrator | 2026-03-24 02:59:09.942061 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-24 02:59:09.942065 | orchestrator | Tuesday 24 March 2026 02:59:08 +0000 (0:00:00.597) 0:00:01.590 ********* 2026-03-24 02:59:09.942074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:09.942081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:09.942102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:09.942112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 02:59:09.942119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 02:59:09.942123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 02:59:09.942128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:09.942132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:09.942137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:09.942145 | orchestrator | 2026-03-24 02:59:09.942150 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-03-24 02:59:09.942157 | orchestrator | Tuesday 24 March 2026 02:59:09 +0000 (0:00:01.867) 0:00:03.458 ********* 2026-03-24 02:59:16.075209 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:16.075308 | orchestrator | 2026-03-24 02:59:16.075324 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-24 02:59:16.075391 | orchestrator | Tuesday 24 March 2026 02:59:10 +0000 (0:00:00.274) 0:00:03.732 ********* 2026-03-24 02:59:16.075403 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:16.075413 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:59:16.075422 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:59:16.075431 | orchestrator | 2026-03-24 02:59:16.075441 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-24 02:59:16.075450 | orchestrator | Tuesday 24 March 2026 02:59:10 +0000 (0:00:00.308) 0:00:04.041 ********* 2026-03-24 02:59:16.075460 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 02:59:16.075469 | orchestrator | 2026-03-24 02:59:16.075477 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-24 02:59:16.075487 | orchestrator | Tuesday 24 March 2026 02:59:11 +0000 (0:00:00.826) 0:00:04.867 ********* 2026-03-24 02:59:16.075497 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 02:59:16.075507 | orchestrator | 2026-03-24 02:59:16.075516 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-24 02:59:16.075525 | orchestrator | Tuesday 24 March 2026 02:59:11 +0000 (0:00:00.532) 0:00:05.400 ********* 2026-03-24 02:59:16.075539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:16.075553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:16.075564 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:16.075619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 02:59:16.075633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 02:59:16.075643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 02:59:16.075653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:16.075662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:16.075678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:16.075687 | orchestrator | 2026-03-24 02:59:16.075696 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-24 02:59:16.075706 | orchestrator | Tuesday 24 March 2026 02:59:15 +0000 (0:00:03.640) 0:00:09.040 ********* 2026-03-24 02:59:16.075723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:59:16.871007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:16.871130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:59:16.871152 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:16.871175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:59:16.871232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:16.871264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:59:16.871284 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:59:16.871329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:59:16.871381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-24 02:59:16.871394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:59:16.871414 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:59:16.871426 | orchestrator | 2026-03-24 02:59:16.871451 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-24 02:59:16.871465 | orchestrator | Tuesday 24 March 2026 02:59:16 +0000 (0:00:00.555) 0:00:09.596 ********* 2026-03-24 02:59:16.871490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:59:16.871522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:16.871546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:59:20.506792 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:20.506894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:59:20.506912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:20.506944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:59:20.506953 | 
orchestrator | skipping: [testbed-node-1] 2026-03-24 02:59:20.506975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:59:20.506985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:20.507008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:59:20.507017 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:59:20.507025 | orchestrator | 2026-03-24 02:59:20.507034 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-24 02:59:20.507043 | orchestrator | Tuesday 24 March 2026 02:59:16 +0000 (0:00:00.794) 0:00:10.391 ********* 2026-03-24 02:59:20.507052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:20.507067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:20.507172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:20.507210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 02:59:25.188813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 02:59:25.188912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-03-24 02:59:25.188921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:25.188928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:25.188945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 
02:59:25.188952 | orchestrator | 2026-03-24 02:59:25.188971 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-24 02:59:25.188978 | orchestrator | Tuesday 24 March 2026 02:59:20 +0000 (0:00:03.636) 0:00:14.027 ********* 2026-03-24 02:59:25.189005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:25.189014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-24 02:59:25.189027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:25.189034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:25.189044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:25.189056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:28.776086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:28.776209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:28.776221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 02:59:28.776231 | orchestrator | 2026-03-24 02:59:28.776241 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-24 02:59:28.776251 | orchestrator | Tuesday 24 March 2026 02:59:25 +0000 (0:00:04.682) 0:00:18.709 ********* 2026-03-24 02:59:28.776259 | orchestrator | changed: [testbed-node-1] 2026-03-24 02:59:28.776268 | orchestrator | changed: [testbed-node-0] 2026-03-24 02:59:28.776276 | orchestrator | changed: [testbed-node-2] 2026-03-24 02:59:28.776283 | orchestrator | 
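Each container item in the loop output above carries a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`, with seconds stored as strings and `test` given as `['CMD-SHELL', '<command>']`). As an illustration only — this helper is not part of the job or of kolla-ansible — a minimal sketch of mapping such a dict onto standard `docker run --health-*` flags:

```python
# Sketch: turn a kolla-style healthcheck dict (shape taken from the log
# items above) into `docker run` CLI flags. The flag names are Docker's
# standard --health-* options; the helper itself is hypothetical.

def healthcheck_flags(hc: dict) -> list[str]:
    """Map kolla healthcheck keys to docker run --health-* flags."""
    # Interval/start_period/timeout are plain second counts kept as strings.
    flags = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    # 'test' is ['CMD-SHELL', '<command>']; docker takes the bare command.
    kind, cmd = hc["test"]
    if kind == "CMD-SHELL":
        flags.append(f"--health-cmd={cmd}")
    return flags

# Dict copied from the keystone item for testbed-node-0 in the log.
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
    "timeout": "30",
}
flags = healthcheck_flags(hc)
print(flags)
```

The `healthcheck_curl` / `healthcheck_listen` commands seen in the log are scripts shipped inside the kolla images, which is why the `test` entries reference them without a path.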
2026-03-24 02:59:28.776292 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-24 02:59:28.776300 | orchestrator | Tuesday 24 March 2026 02:59:26 +0000 (0:00:01.496) 0:00:20.206 ********* 2026-03-24 02:59:28.776307 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:28.776343 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:59:28.776351 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:59:28.776359 | orchestrator | 2026-03-24 02:59:28.776367 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-24 02:59:28.776375 | orchestrator | Tuesday 24 March 2026 02:59:27 +0000 (0:00:00.764) 0:00:20.971 ********* 2026-03-24 02:59:28.776382 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:28.776390 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:59:28.776398 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:59:28.776405 | orchestrator | 2026-03-24 02:59:28.776426 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-24 02:59:28.776435 | orchestrator | Tuesday 24 March 2026 02:59:27 +0000 (0:00:00.478) 0:00:21.450 ********* 2026-03-24 02:59:28.776443 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:28.776450 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:59:28.776458 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:59:28.776466 | orchestrator | 2026-03-24 02:59:28.776474 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-24 02:59:28.776482 | orchestrator | Tuesday 24 March 2026 02:59:28 +0000 (0:00:00.297) 0:00:21.748 ********* 2026-03-24 02:59:28.776509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:59:28.776526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:28.776536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:59:28.776544 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:28.776565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:59:28.776579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:28.776588 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:59:28.776605 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:59:28.776621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-24 02:59:47.095543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 02:59:47.095650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 02:59:47.095663 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:59:47.095672 | orchestrator | 2026-03-24 02:59:47.095680 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-24 02:59:47.095689 | orchestrator | Tuesday 24 March 2026 02:59:28 +0000 (0:00:00.546) 0:00:22.294 ********* 2026-03-24 02:59:47.095696 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:47.095703 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:59:47.095710 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:59:47.095716 | orchestrator | 2026-03-24 02:59:47.095723 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-24 02:59:47.095730 | orchestrator | Tuesday 24 March 2026 02:59:29 +0000 (0:00:00.310) 0:00:22.605 ********* 2026-03-24 02:59:47.095737 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-24 02:59:47.095745 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-24 02:59:47.095773 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-24 02:59:47.095781 | orchestrator | 2026-03-24 02:59:47.095800 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-24 02:59:47.095807 | orchestrator | Tuesday 24 March 2026 02:59:30 +0000 (0:00:01.807) 0:00:24.412 ********* 2026-03-24 02:59:47.095814 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 02:59:47.095820 | orchestrator | 2026-03-24 02:59:47.095827 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-24 02:59:47.095833 | orchestrator | Tuesday 24 March 2026 02:59:31 +0000 (0:00:00.900) 0:00:25.313 ********* 2026-03-24 02:59:47.095840 | orchestrator | skipping: [testbed-node-0] 2026-03-24 02:59:47.095847 | orchestrator | skipping: [testbed-node-1] 2026-03-24 02:59:47.095853 | orchestrator | skipping: [testbed-node-2] 2026-03-24 02:59:47.095860 | orchestrator | 2026-03-24 02:59:47.095866 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-24 02:59:47.095873 | orchestrator | Tuesday 24 March 2026 02:59:32 +0000 (0:00:00.538) 0:00:25.851 ********* 2026-03-24 02:59:47.095880 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-24 02:59:47.095887 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 02:59:47.095893 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-24 02:59:47.095900 | orchestrator | 2026-03-24 02:59:47.095907 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-24 02:59:47.095914 | orchestrator | Tuesday 24 March 2026 02:59:33 +0000 (0:00:00.952) 
0:00:26.804 ********* 2026-03-24 02:59:47.095921 | orchestrator | ok: [testbed-node-0] 2026-03-24 02:59:47.095929 | orchestrator | ok: [testbed-node-1] 2026-03-24 02:59:47.095936 | orchestrator | ok: [testbed-node-2] 2026-03-24 02:59:47.095942 | orchestrator | 2026-03-24 02:59:47.095949 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-24 02:59:47.095956 | orchestrator | Tuesday 24 March 2026 02:59:33 +0000 (0:00:00.475) 0:00:27.279 ********* 2026-03-24 02:59:47.095963 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-24 02:59:47.095970 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-24 02:59:47.095977 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-24 02:59:47.095983 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-24 02:59:47.095990 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-24 02:59:47.095997 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-24 02:59:47.096004 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-24 02:59:47.096011 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-24 02:59:47.096032 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-24 02:59:47.096039 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-24 02:59:47.096046 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-24 
02:59:47.096053 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-24 02:59:47.096060 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-24 02:59:47.096066 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-24 02:59:47.096073 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-24 02:59:47.096080 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-24 02:59:47.096094 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-24 02:59:47.096102 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-24 02:59:47.096110 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-24 02:59:47.096118 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-24 02:59:47.096126 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-24 02:59:47.096135 | orchestrator | 2026-03-24 02:59:47.096143 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-24 02:59:47.096150 | orchestrator | Tuesday 24 March 2026 02:59:42 +0000 (0:00:08.644) 0:00:35.924 ********* 2026-03-24 02:59:47.096158 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-24 02:59:47.096166 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-24 02:59:47.096174 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-24 02:59:47.096182 
| orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-24 02:59:47.096190 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-24 02:59:47.096198 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-24 02:59:47.096206 | orchestrator | 2026-03-24 02:59:47.096214 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-24 02:59:47.096225 | orchestrator | Tuesday 24 March 2026 02:59:44 +0000 (0:00:02.525) 0:00:38.450 ********* 2026-03-24 02:59:47.096236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 02:59:47.096252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 03:01:19.382594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-24 03:01:19.382720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 03:01:19.382747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 03:01:19.382757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-24 03:01:19.382781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 03:01:19.382814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 03:01:19.382830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-24 03:01:19.382839 | orchestrator | 2026-03-24 03:01:19.382849 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
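The keystone items above also carry an `haproxy` dict with one entry per listener (`keystone_internal`, `keystone_external`), each naming a `mode`, `port`, `listen_port`, and `backend_http_extra` options such as `balance roundrobin`. A minimal sketch of rendering one such entry into an haproxy `listen` block — the layout here is illustrative and is not the template kolla-ansible actually uses:

```python
# Sketch: render one haproxy listen block from a kolla-style service dict
# (shape taken from the keystone_internal entry in the log). The section
# layout is illustrative, not kolla-ansible's real template.

def render_listen(name: str, cfg: dict, backends: list[tuple[str, str]]) -> str:
    """Render an haproxy `listen` section; backends is (server, ip) pairs."""
    lines = [
        f"listen {name}",
        f"    mode {cfg['mode']}",
        f"    bind *:{cfg['listen_port']}",
    ]
    # e.g. 'balance roundrobin' from backend_http_extra in the log items.
    lines += [f"    {extra}" for extra in cfg.get("backend_http_extra", [])]
    for server, ip in backends:
        lines.append(f"    server {server} {ip}:{cfg['port']} check")
    return "\n".join(lines)

cfg = {"mode": "http", "listen_port": "5000", "port": "5000",
       "backend_http_extra": ["balance roundrobin"]}
block = render_listen("keystone_internal", cfg,
                      [("testbed-node-0", "192.168.16.10"),
                       ("testbed-node-1", "192.168.16.11"),
                       ("testbed-node-2", "192.168.16.12")])
print(block)
```

The `external_fqdn: api.testbed.osism.xyz` field on the external entry matches the public endpoint registered later in the log (`https://api.testbed.osism.xyz:5000`).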
2026-03-24 03:01:19.382859 | orchestrator | Tuesday 24 March 2026 02:59:47 +0000 (0:00:02.162) 0:00:40.613 ********* 2026-03-24 03:01:19.382867 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:01:19.382876 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:01:19.382884 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:01:19.382892 | orchestrator | 2026-03-24 03:01:19.382900 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-24 03:01:19.382908 | orchestrator | Tuesday 24 March 2026 02:59:47 +0000 (0:00:00.504) 0:00:41.117 ********* 2026-03-24 03:01:19.382916 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:01:19.382924 | orchestrator | 2026-03-24 03:01:19.382932 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-24 03:01:19.382940 | orchestrator | Tuesday 24 March 2026 02:59:50 +0000 (0:00:02.454) 0:00:43.571 ********* 2026-03-24 03:01:19.382948 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:01:19.382956 | orchestrator | 2026-03-24 03:01:19.382964 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-24 03:01:19.382972 | orchestrator | Tuesday 24 March 2026 02:59:52 +0000 (0:00:02.322) 0:00:45.894 ********* 2026-03-24 03:01:19.382980 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:01:19.382988 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:01:19.382996 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:01:19.383004 | orchestrator | 2026-03-24 03:01:19.383012 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-24 03:01:19.383020 | orchestrator | Tuesday 24 March 2026 02:59:53 +0000 (0:00:00.880) 0:00:46.774 ********* 2026-03-24 03:01:19.383028 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:01:19.383036 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:01:19.383044 | orchestrator | ok: 
[testbed-node-2] 2026-03-24 03:01:19.383052 | orchestrator | 2026-03-24 03:01:19.383060 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-24 03:01:19.383074 | orchestrator | Tuesday 24 March 2026 02:59:53 +0000 (0:00:00.319) 0:00:47.094 ********* 2026-03-24 03:01:19.383082 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:01:19.383153 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:01:19.383164 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:01:19.383174 | orchestrator | 2026-03-24 03:01:19.383183 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-24 03:01:19.383193 | orchestrator | Tuesday 24 March 2026 02:59:54 +0000 (0:00:00.538) 0:00:47.632 ********* 2026-03-24 03:01:19.383202 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:01:19.383211 | orchestrator | 2026-03-24 03:01:19.383220 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-24 03:01:19.383229 | orchestrator | Tuesday 24 March 2026 03:00:08 +0000 (0:00:14.300) 0:01:01.933 ********* 2026-03-24 03:01:19.383239 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:01:19.383248 | orchestrator | 2026-03-24 03:01:19.383258 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-24 03:01:19.383267 | orchestrator | Tuesday 24 March 2026 03:00:19 +0000 (0:00:11.383) 0:01:13.316 ********* 2026-03-24 03:01:19.383282 | orchestrator | 2026-03-24 03:01:19.383292 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-24 03:01:19.383301 | orchestrator | Tuesday 24 March 2026 03:00:19 +0000 (0:00:00.064) 0:01:13.380 ********* 2026-03-24 03:01:19.383310 | orchestrator | 2026-03-24 03:01:19.383319 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-24 
03:01:19.383328 | orchestrator | Tuesday 24 March 2026 03:00:19 +0000 (0:00:00.067) 0:01:13.448 ********* 2026-03-24 03:01:19.383337 | orchestrator | 2026-03-24 03:01:19.383347 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-24 03:01:19.383356 | orchestrator | Tuesday 24 March 2026 03:00:19 +0000 (0:00:00.068) 0:01:13.516 ********* 2026-03-24 03:01:19.383365 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:01:19.383375 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:01:19.383384 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:01:19.383393 | orchestrator | 2026-03-24 03:01:19.383403 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-24 03:01:19.383411 | orchestrator | Tuesday 24 March 2026 03:01:03 +0000 (0:00:43.127) 0:01:56.643 ********* 2026-03-24 03:01:19.383419 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:01:19.383426 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:01:19.383434 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:01:19.383442 | orchestrator | 2026-03-24 03:01:19.383450 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-24 03:01:19.383458 | orchestrator | Tuesday 24 March 2026 03:01:12 +0000 (0:00:09.655) 0:02:06.299 ********* 2026-03-24 03:01:19.383465 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:01:19.383473 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:01:19.383486 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:01:19.383499 | orchestrator | 2026-03-24 03:01:19.383513 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-24 03:01:19.383524 | orchestrator | Tuesday 24 March 2026 03:01:18 +0000 (0:00:06.001) 0:02:12.301 ********* 2026-03-24 03:01:19.383545 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:02:13.713824 | orchestrator | 2026-03-24 03:02:13.714199 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-24 03:02:13.714239 | orchestrator | Tuesday 24 March 2026 03:01:19 +0000 (0:00:00.599) 0:02:12.901 ********* 2026-03-24 03:02:13.714252 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:02:13.714266 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:02:13.714278 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:02:13.714289 | orchestrator | 2026-03-24 03:02:13.714300 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-24 03:02:13.714312 | orchestrator | Tuesday 24 March 2026 03:01:20 +0000 (0:00:01.219) 0:02:14.120 ********* 2026-03-24 03:02:13.714323 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:02:13.714337 | orchestrator | 2026-03-24 03:02:13.714350 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-03-24 03:02:13.714362 | orchestrator | Tuesday 24 March 2026 03:01:22 +0000 (0:00:01.650) 0:02:15.771 ********* 2026-03-24 03:02:13.714375 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-24 03:02:13.714388 | orchestrator | 2026-03-24 03:02:13.714401 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-24 03:02:13.714413 | orchestrator | Tuesday 24 March 2026 03:01:34 +0000 (0:00:12.551) 0:02:28.322 ********* 2026-03-24 03:02:13.714426 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-24 03:02:13.714439 | orchestrator | 2026-03-24 03:02:13.714451 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-24 03:02:13.714464 | orchestrator | Tuesday 24 March 2026 03:02:01 +0000 (0:00:26.419) 0:02:54.741 ********* 2026-03-24 03:02:13.714476 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-24 03:02:13.714517 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-24 03:02:13.714528 | orchestrator | 2026-03-24 03:02:13.714539 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-24 03:02:13.714550 | orchestrator | Tuesday 24 March 2026 03:02:08 +0000 (0:00:07.239) 0:03:01.981 ********* 2026-03-24 03:02:13.714561 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:13.714571 | orchestrator | 2026-03-24 03:02:13.714582 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-24 03:02:13.714593 | orchestrator | Tuesday 24 March 2026 03:02:08 +0000 (0:00:00.134) 0:03:02.116 ********* 2026-03-24 03:02:13.714604 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:13.714615 | orchestrator | 2026-03-24 03:02:13.714625 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-03-24 03:02:13.714636 | orchestrator | Tuesday 24 March 2026 03:02:08 +0000 (0:00:00.130) 0:03:02.246 ********* 2026-03-24 03:02:13.714646 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:13.714657 | orchestrator | 2026-03-24 03:02:13.714682 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-24 03:02:13.714694 | orchestrator | Tuesday 24 March 2026 03:02:08 +0000 (0:00:00.146) 0:03:02.393 ********* 2026-03-24 03:02:13.714704 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:13.714715 | orchestrator | 2026-03-24 03:02:13.714726 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-24 03:02:13.714737 | orchestrator | Tuesday 24 March 2026 03:02:09 +0000 (0:00:00.509) 0:03:02.903 ********* 2026-03-24 03:02:13.714748 | orchestrator | ok: [testbed-node-0] 2026-03-24 
03:02:13.714758 | orchestrator | 2026-03-24 03:02:13.714769 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-24 03:02:13.714779 | orchestrator | Tuesday 24 March 2026 03:02:12 +0000 (0:00:03.488) 0:03:06.392 ********* 2026-03-24 03:02:13.714790 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:13.714801 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:02:13.714811 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:02:13.714822 | orchestrator | 2026-03-24 03:02:13.714833 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:02:13.714845 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-24 03:02:13.714857 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-24 03:02:13.714868 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-24 03:02:13.714879 | orchestrator | 2026-03-24 03:02:13.714893 | orchestrator | 2026-03-24 03:02:13.714913 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:02:13.714930 | orchestrator | Tuesday 24 March 2026 03:02:13 +0000 (0:00:00.450) 0:03:06.842 ********* 2026-03-24 03:02:13.714947 | orchestrator | =============================================================================== 2026-03-24 03:02:13.714965 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 43.13s 2026-03-24 03:02:13.714982 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.42s 2026-03-24 03:02:13.715030 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.30s 2026-03-24 03:02:13.715049 | orchestrator | keystone : Creating admin project, user, role, service, and 
endpoint --- 12.55s 2026-03-24 03:02:13.715068 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.38s 2026-03-24 03:02:13.715086 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.66s 2026-03-24 03:02:13.715104 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.64s 2026-03-24 03:02:13.715122 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.24s 2026-03-24 03:02:13.715156 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.00s 2026-03-24 03:02:13.715201 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.68s 2026-03-24 03:02:13.715214 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.64s 2026-03-24 03:02:13.715225 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.64s 2026-03-24 03:02:13.715239 | orchestrator | keystone : Creating default user role ----------------------------------- 3.49s 2026-03-24 03:02:13.715258 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.53s 2026-03-24 03:02:13.715276 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.45s 2026-03-24 03:02:13.715294 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.32s 2026-03-24 03:02:13.715306 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.16s 2026-03-24 03:02:13.715317 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.87s 2026-03-24 03:02:13.715327 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.81s 2026-03-24 03:02:13.715338 | orchestrator | keystone : Run key distribution ----------------------------------------- 
1.65s 2026-03-24 03:02:15.960982 | orchestrator | 2026-03-24 03:02:15 | INFO  | Task c59f2a6a-61d4-4fab-900b-a8cc9d133386 (placement) was prepared for execution. 2026-03-24 03:02:15.961121 | orchestrator | 2026-03-24 03:02:15 | INFO  | It takes a moment until task c59f2a6a-61d4-4fab-900b-a8cc9d133386 (placement) has been started and output is visible here. 2026-03-24 03:02:51.296505 | orchestrator | 2026-03-24 03:02:51.296651 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:02:51.296663 | orchestrator | 2026-03-24 03:02:51.296671 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:02:51.296679 | orchestrator | Tuesday 24 March 2026 03:02:19 +0000 (0:00:00.251) 0:00:00.251 ********* 2026-03-24 03:02:51.296685 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:02:51.296693 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:02:51.296701 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:02:51.296707 | orchestrator | 2026-03-24 03:02:51.296714 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:02:51.296720 | orchestrator | Tuesday 24 March 2026 03:02:20 +0000 (0:00:00.296) 0:00:00.547 ********* 2026-03-24 03:02:51.296728 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-24 03:02:51.296735 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-24 03:02:51.296741 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-24 03:02:51.296747 | orchestrator | 2026-03-24 03:02:51.296771 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-24 03:02:51.296778 | orchestrator | 2026-03-24 03:02:51.296785 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-24 03:02:51.296791 | orchestrator | Tuesday 24 March 2026 03:02:20 
+0000 (0:00:00.424) 0:00:00.972 ********* 2026-03-24 03:02:51.296798 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:02:51.296806 | orchestrator | 2026-03-24 03:02:51.296812 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-24 03:02:51.296819 | orchestrator | Tuesday 24 March 2026 03:02:21 +0000 (0:00:00.509) 0:00:01.482 ********* 2026-03-24 03:02:51.296825 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-24 03:02:51.296831 | orchestrator | 2026-03-24 03:02:51.296838 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-24 03:02:51.296844 | orchestrator | Tuesday 24 March 2026 03:02:25 +0000 (0:00:03.861) 0:00:05.344 ********* 2026-03-24 03:02:51.296850 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-24 03:02:51.296880 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-24 03:02:51.296887 | orchestrator | 2026-03-24 03:02:51.296893 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-24 03:02:51.296899 | orchestrator | Tuesday 24 March 2026 03:02:31 +0000 (0:00:06.514) 0:00:11.858 ********* 2026-03-24 03:02:51.296906 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-24 03:02:51.296912 | orchestrator | 2026-03-24 03:02:51.296918 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-24 03:02:51.296924 | orchestrator | Tuesday 24 March 2026 03:02:35 +0000 (0:00:03.933) 0:00:15.792 ********* 2026-03-24 03:02:51.296931 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:02:51.296937 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
service) 2026-03-24 03:02:51.296966 | orchestrator | 2026-03-24 03:02:51.296973 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-24 03:02:51.296979 | orchestrator | Tuesday 24 March 2026 03:02:39 +0000 (0:00:04.199) 0:00:19.992 ********* 2026-03-24 03:02:51.296985 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-24 03:02:51.296993 | orchestrator | 2026-03-24 03:02:51.297001 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-24 03:02:51.297008 | orchestrator | Tuesday 24 March 2026 03:02:42 +0000 (0:00:03.268) 0:00:23.260 ********* 2026-03-24 03:02:51.297015 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-24 03:02:51.297023 | orchestrator | 2026-03-24 03:02:51.297030 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-24 03:02:51.297037 | orchestrator | Tuesday 24 March 2026 03:02:47 +0000 (0:00:04.286) 0:00:27.547 ********* 2026-03-24 03:02:51.297044 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:51.297051 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:02:51.297059 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:02:51.297066 | orchestrator | 2026-03-24 03:02:51.297073 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-24 03:02:51.297080 | orchestrator | Tuesday 24 March 2026 03:02:47 +0000 (0:00:00.290) 0:00:27.837 ********* 2026-03-24 03:02:51.297091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:02:51.297126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:02:51.297142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:02:51.297149 | orchestrator | 2026-03-24 03:02:51.297157 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-24 03:02:51.297165 | orchestrator | Tuesday 24 March 2026 03:02:48 +0000 (0:00:01.084) 0:00:28.922 ********* 2026-03-24 03:02:51.297172 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:51.297179 | orchestrator | 2026-03-24 03:02:51.297186 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-24 03:02:51.297193 | orchestrator | Tuesday 24 March 2026 03:02:48 +0000 (0:00:00.284) 0:00:29.206 ********* 2026-03-24 03:02:51.297200 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:51.297208 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:02:51.297215 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:02:51.297221 | orchestrator | 2026-03-24 03:02:51.297228 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-24 03:02:51.297235 | orchestrator | Tuesday 24 March 2026 03:02:49 +0000 (0:00:00.276) 0:00:29.483 ********* 2026-03-24 03:02:51.297243 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:02:51.297250 | orchestrator | 2026-03-24 03:02:51.297257 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-24 03:02:51.297264 | orchestrator | Tuesday 24 March 2026 03:02:49 +0000 (0:00:00.495) 0:00:29.978 ********* 2026-03-24 
03:02:51.297272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:02:51.297287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:02:53.867852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:02:53.868099 | orchestrator | 2026-03-24 03:02:53.868119 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-24 03:02:53.868132 | orchestrator | Tuesday 24 March 2026 03:02:51 +0000 (0:00:01.568) 0:00:31.546 ********* 2026-03-24 03:02:53.868145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-24 03:02:53.868156 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:53.868168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-24 03:02:53.868178 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:02:53.868189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-24 03:02:53.868227 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:02:53.868237 | orchestrator | 2026-03-24 03:02:53.868248 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-03-24 03:02:53.868277 | orchestrator | Tuesday 24 March 2026 03:02:51 +0000 (0:00:00.457) 0:00:32.004 ********* 2026-03-24 03:02:53.868296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-24 03:02:53.868307 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:02:53.868318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-24 03:02:53.868331 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:02:53.868343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-24 03:02:53.868354 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:02:53.868366 | orchestrator | 2026-03-24 03:02:53.868377 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-24 03:02:53.868389 | orchestrator | Tuesday 24 March 2026 03:02:52 +0000 (0:00:00.612) 0:00:32.617 ********* 2026-03-24 03:02:53.868400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:02:53.868437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:03:00.694523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:03:00.694670 | orchestrator | 2026-03-24 03:03:00.694697 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-24 03:03:00.694717 | orchestrator | Tuesday 24 March 2026 03:02:53 +0000 (0:00:01.507) 0:00:34.125 ********* 2026-03-24 03:03:00.694736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:03:00.694756 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:03:00.694827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:03:00.694844 | orchestrator | 2026-03-24 03:03:00.694859 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
*************** 2026-03-24 03:03:00.694875 | orchestrator | Tuesday 24 March 2026 03:02:56 +0000 (0:00:02.167) 0:00:36.292 ********* 2026-03-24 03:03:00.694919 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-24 03:03:00.694968 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-24 03:03:00.694986 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-24 03:03:00.695004 | orchestrator | 2026-03-24 03:03:00.695023 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-24 03:03:00.695044 | orchestrator | Tuesday 24 March 2026 03:02:57 +0000 (0:00:01.468) 0:00:37.761 ********* 2026-03-24 03:03:00.695066 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:03:00.695084 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:03:00.695095 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:03:00.695105 | orchestrator | 2026-03-24 03:03:00.695127 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-24 03:03:00.695151 | orchestrator | Tuesday 24 March 2026 03:02:58 +0000 (0:00:01.333) 0:00:39.095 ********* 2026-03-24 03:03:00.695177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-24 03:03:00.695206 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:03:00.695233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-24 03:03:00.695278 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:03:00.695306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 
'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-24 03:03:00.695333 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:03:00.695359 | orchestrator | 2026-03-24 03:03:00.695378 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-24 03:03:00.695412 | orchestrator | Tuesday 24 March 2026 03:02:59 +0000 (0:00:00.769) 0:00:39.865 ********* 2026-03-24 03:03:00.695452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:03:27.287365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:03:27.287482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-24 03:03:27.287492 | orchestrator | 2026-03-24 03:03:27.287501 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-24 03:03:27.287509 | orchestrator | Tuesday 24 March 2026 03:03:00 +0000 (0:00:01.088) 0:00:40.954 ********* 2026-03-24 03:03:27.287515 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:03:27.287522 | orchestrator | 2026-03-24 03:03:27.287529 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-24 03:03:27.287536 | orchestrator | Tuesday 24 March 2026 03:03:02 +0000 (0:00:02.142) 0:00:43.096 ********* 2026-03-24 03:03:27.287542 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:03:27.287548 | orchestrator | 2026-03-24 03:03:27.287554 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-24 03:03:27.287560 | orchestrator | Tuesday 24 March 2026 03:03:05 +0000 (0:00:02.250) 0:00:45.347 ********* 2026-03-24 03:03:27.287567 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:03:27.287573 | orchestrator | 2026-03-24 03:03:27.287579 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-24 03:03:27.287585 | orchestrator | Tuesday 24 March 2026 03:03:19 +0000 (0:00:14.322) 0:00:59.669 ********* 2026-03-24 03:03:27.287591 | orchestrator | 2026-03-24 03:03:27.287598 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-24 03:03:27.287604 | orchestrator | Tuesday 24 March 2026 03:03:19 +0000 (0:00:00.065) 0:00:59.734 ********* 2026-03-24 03:03:27.287610 | orchestrator | 2026-03-24 03:03:27.287616 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-24 03:03:27.287622 | orchestrator | Tuesday 24 March 2026 03:03:19 +0000 (0:00:00.064) 0:00:59.799 ********* 2026-03-24 03:03:27.287628 | orchestrator | 2026-03-24 03:03:27.287635 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-24 03:03:27.287641 | orchestrator | Tuesday 24 March 2026 03:03:19 +0000 (0:00:00.067) 0:00:59.866 ********* 2026-03-24 03:03:27.287647 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:03:27.287664 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:03:27.287670 | orchestrator | changed: [testbed-node-0] 2026-03-24 
03:03:27.287677 | orchestrator | 2026-03-24 03:03:27.287683 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:03:27.287690 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 03:03:27.287697 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-24 03:03:27.287703 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-24 03:03:27.287710 | orchestrator | 2026-03-24 03:03:27.287716 | orchestrator | 2026-03-24 03:03:27.287722 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:03:27.287728 | orchestrator | Tuesday 24 March 2026 03:03:26 +0000 (0:00:07.385) 0:01:07.252 ********* 2026-03-24 03:03:27.287739 | orchestrator | =============================================================================== 2026-03-24 03:03:27.287746 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.32s 2026-03-24 03:03:27.287765 | orchestrator | placement : Restart placement-api container ----------------------------- 7.39s 2026-03-24 03:03:27.287772 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.51s 2026-03-24 03:03:27.287778 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.29s 2026-03-24 03:03:27.287784 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.20s 2026-03-24 03:03:27.287791 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.93s 2026-03-24 03:03:27.287797 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.86s 2026-03-24 03:03:27.287803 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.27s 2026-03-24 03:03:27.287826 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.25s 2026-03-24 03:03:27.287839 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.17s 2026-03-24 03:03:27.287846 | orchestrator | placement : Creating placement databases -------------------------------- 2.14s 2026-03-24 03:03:27.287852 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.57s 2026-03-24 03:03:27.287858 | orchestrator | placement : Copying over config.json files for services ----------------- 1.51s 2026-03-24 03:03:27.287864 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.47s 2026-03-24 03:03:27.287870 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.33s 2026-03-24 03:03:27.287877 | orchestrator | placement : Check placement containers ---------------------------------- 1.09s 2026-03-24 03:03:27.287883 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.08s 2026-03-24 03:03:27.287955 | orchestrator | placement : Copying over existing policy file --------------------------- 0.77s 2026-03-24 03:03:27.287965 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.61s 2026-03-24 03:03:27.287973 | orchestrator | placement : include_tasks ----------------------------------------------- 0.51s 2026-03-24 03:03:29.501354 | orchestrator | 2026-03-24 03:03:29 | INFO  | Task cf85bf32-a2d8-46c9-923e-62e166782326 (neutron) was prepared for execution. 2026-03-24 03:03:29.501462 | orchestrator | 2026-03-24 03:03:29 | INFO  | It takes a moment until task cf85bf32-a2d8-46c9-923e-62e166782326 (neutron) has been started and output is visible here. 
2026-03-24 03:04:16.782115 | orchestrator | 2026-03-24 03:04:16.782218 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:04:16.782230 | orchestrator | 2026-03-24 03:04:16.782237 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:04:16.782244 | orchestrator | Tuesday 24 March 2026 03:03:32 +0000 (0:00:00.188) 0:00:00.188 ********* 2026-03-24 03:04:16.782251 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:04:16.782259 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:04:16.782266 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:04:16.782273 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:04:16.782280 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:04:16.782286 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:04:16.782293 | orchestrator | 2026-03-24 03:04:16.782300 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:04:16.782306 | orchestrator | Tuesday 24 March 2026 03:03:33 +0000 (0:00:00.493) 0:00:00.681 ********* 2026-03-24 03:04:16.782313 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-24 03:04:16.782320 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-24 03:04:16.782327 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-24 03:04:16.782333 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-24 03:04:16.782339 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-24 03:04:16.782360 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-24 03:04:16.782364 | orchestrator | 2026-03-24 03:04:16.782368 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-24 03:04:16.782371 | orchestrator | 2026-03-24 03:04:16.782375 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-03-24 03:04:16.782379 | orchestrator | Tuesday 24 March 2026 03:03:33 +0000 (0:00:00.455) 0:00:01.137 ********* 2026-03-24 03:04:16.782394 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 03:04:16.782399 | orchestrator | 2026-03-24 03:04:16.782403 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-24 03:04:16.782406 | orchestrator | Tuesday 24 March 2026 03:03:34 +0000 (0:00:00.968) 0:00:02.105 ********* 2026-03-24 03:04:16.782410 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:04:16.782414 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:04:16.782418 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:04:16.782421 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:04:16.782425 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:04:16.782429 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:04:16.782433 | orchestrator | 2026-03-24 03:04:16.782436 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-24 03:04:16.782440 | orchestrator | Tuesday 24 March 2026 03:03:35 +0000 (0:00:01.167) 0:00:03.273 ********* 2026-03-24 03:04:16.782444 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:04:16.782448 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:04:16.782451 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:04:16.782455 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:04:16.782459 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:04:16.782462 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:04:16.782466 | orchestrator | 2026-03-24 03:04:16.782470 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-24 03:04:16.782473 | orchestrator | Tuesday 24 March 2026 03:03:36 +0000 (0:00:01.017) 0:00:04.291 ********* 
2026-03-24 03:04:16.782477 | orchestrator | ok: [testbed-node-0] => { 2026-03-24 03:04:16.782482 | orchestrator |  "changed": false, 2026-03-24 03:04:16.782486 | orchestrator |  "msg": "All assertions passed" 2026-03-24 03:04:16.782490 | orchestrator | } 2026-03-24 03:04:16.782494 | orchestrator | ok: [testbed-node-1] => { 2026-03-24 03:04:16.782497 | orchestrator |  "changed": false, 2026-03-24 03:04:16.782501 | orchestrator |  "msg": "All assertions passed" 2026-03-24 03:04:16.782505 | orchestrator | } 2026-03-24 03:04:16.782508 | orchestrator | ok: [testbed-node-2] => { 2026-03-24 03:04:16.782512 | orchestrator |  "changed": false, 2026-03-24 03:04:16.782516 | orchestrator |  "msg": "All assertions passed" 2026-03-24 03:04:16.782519 | orchestrator | } 2026-03-24 03:04:16.782523 | orchestrator | ok: [testbed-node-3] => { 2026-03-24 03:04:16.782527 | orchestrator |  "changed": false, 2026-03-24 03:04:16.782531 | orchestrator |  "msg": "All assertions passed" 2026-03-24 03:04:16.782534 | orchestrator | } 2026-03-24 03:04:16.782538 | orchestrator | ok: [testbed-node-4] => { 2026-03-24 03:04:16.782542 | orchestrator |  "changed": false, 2026-03-24 03:04:16.782546 | orchestrator |  "msg": "All assertions passed" 2026-03-24 03:04:16.782550 | orchestrator | } 2026-03-24 03:04:16.782553 | orchestrator | ok: [testbed-node-5] => { 2026-03-24 03:04:16.782557 | orchestrator |  "changed": false, 2026-03-24 03:04:16.782561 | orchestrator |  "msg": "All assertions passed" 2026-03-24 03:04:16.782564 | orchestrator | } 2026-03-24 03:04:16.782568 | orchestrator | 2026-03-24 03:04:16.782572 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-24 03:04:16.782576 | orchestrator | Tuesday 24 March 2026 03:03:37 +0000 (0:00:00.634) 0:00:04.925 ********* 2026-03-24 03:04:16.782579 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:16.782583 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:04:16.782587 | orchestrator 
| skipping: [testbed-node-2] 2026-03-24 03:04:16.782595 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:04:16.782599 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:04:16.782602 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:04:16.782606 | orchestrator | 2026-03-24 03:04:16.782610 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-24 03:04:16.782614 | orchestrator | Tuesday 24 March 2026 03:03:38 +0000 (0:00:00.545) 0:00:05.471 ********* 2026-03-24 03:04:16.782617 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-24 03:04:16.782621 | orchestrator | 2026-03-24 03:04:16.782625 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-24 03:04:16.782629 | orchestrator | Tuesday 24 March 2026 03:03:42 +0000 (0:00:03.931) 0:00:09.402 ********* 2026-03-24 03:04:16.782633 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-24 03:04:16.782637 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-24 03:04:16.782642 | orchestrator | 2026-03-24 03:04:16.782658 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-24 03:04:16.782662 | orchestrator | Tuesday 24 March 2026 03:03:48 +0000 (0:00:06.622) 0:00:16.024 ********* 2026-03-24 03:04:16.782667 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-24 03:04:16.782671 | orchestrator | 2026-03-24 03:04:16.782676 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-24 03:04:16.782681 | orchestrator | Tuesday 24 March 2026 03:03:51 +0000 (0:00:03.241) 0:00:19.266 ********* 2026-03-24 03:04:16.782685 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:04:16.782690 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-03-24 03:04:16.782694 | orchestrator | 2026-03-24 03:04:16.782698 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-24 03:04:16.782703 | orchestrator | Tuesday 24 March 2026 03:03:56 +0000 (0:00:04.214) 0:00:23.480 ********* 2026-03-24 03:04:16.782707 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-24 03:04:16.782712 | orchestrator | 2026-03-24 03:04:16.782716 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-24 03:04:16.782720 | orchestrator | Tuesday 24 March 2026 03:03:59 +0000 (0:00:03.263) 0:00:26.743 ********* 2026-03-24 03:04:16.782724 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-24 03:04:16.782728 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-24 03:04:16.782733 | orchestrator | 2026-03-24 03:04:16.782737 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-24 03:04:16.782741 | orchestrator | Tuesday 24 March 2026 03:04:07 +0000 (0:00:07.959) 0:00:34.703 ********* 2026-03-24 03:04:16.782745 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:16.782750 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:04:16.782754 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:04:16.782758 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:04:16.782763 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:04:16.782770 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:04:16.782775 | orchestrator | 2026-03-24 03:04:16.782779 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-24 03:04:16.782784 | orchestrator | Tuesday 24 March 2026 03:04:08 +0000 (0:00:00.745) 0:00:35.448 ********* 2026-03-24 03:04:16.782788 | orchestrator | skipping: [testbed-node-1] 2026-03-24 
03:04:16.782792 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:04:16.782797 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:16.782801 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:04:16.782806 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:04:16.782810 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:04:16.782814 | orchestrator | 2026-03-24 03:04:16.782835 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-24 03:04:16.782840 | orchestrator | Tuesday 24 March 2026 03:04:10 +0000 (0:00:01.970) 0:00:37.419 ********* 2026-03-24 03:04:16.782848 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:04:16.782852 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:04:16.782858 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:04:16.782864 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:04:16.782870 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:04:16.782876 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:04:16.782884 | orchestrator | 2026-03-24 03:04:16.782894 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-24 03:04:16.782900 | orchestrator | Tuesday 24 March 2026 03:04:12 +0000 (0:00:02.116) 0:00:39.536 ********* 2026-03-24 03:04:16.782906 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:04:16.782912 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:16.782918 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:04:16.782924 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:04:16.782931 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:04:16.782937 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:04:16.782943 | orchestrator | 2026-03-24 03:04:16.782948 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-24 03:04:16.782954 | orchestrator | Tuesday 24 March 2026 03:04:14 +0000 (0:00:02.081) 
0:00:41.618 ********* 2026-03-24 03:04:16.782962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:16.782979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:22.112781 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:22.112927 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:22.112936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:22.112941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:22.112945 | orchestrator | 2026-03-24 03:04:22.112950 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-24 03:04:22.112955 | orchestrator | Tuesday 24 March 2026 03:04:16 +0000 (0:00:02.499) 0:00:44.117 ********* 2026-03-24 03:04:22.112959 | orchestrator | [WARNING]: Skipped 2026-03-24 03:04:22.112964 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-24 03:04:22.112969 | orchestrator | due to this access issue: 2026-03-24 03:04:22.112974 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-24 03:04:22.112978 | orchestrator | a directory 2026-03-24 03:04:22.112982 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 03:04:22.112986 | orchestrator | 2026-03-24 03:04:22.112990 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-24 03:04:22.112993 | orchestrator | Tuesday 24 March 2026 03:04:17 +0000 (0:00:00.790) 0:00:44.908 ********* 2026-03-24 03:04:22.112998 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 03:04:22.113003 | orchestrator | 2026-03-24 03:04:22.113007 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-24 03:04:22.113021 | orchestrator | Tuesday 24 March 2026 03:04:18 +0000 (0:00:01.278) 0:00:46.187 ********* 2026-03-24 03:04:22.113029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:22.113037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:22.113041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:22.113045 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:22.113053 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:26.548748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:26.548907 | orchestrator | 2026-03-24 03:04:26.548926 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-24 03:04:26.548938 | orchestrator | Tuesday 24 March 2026 03:04:22 +0000 (0:00:03.256) 0:00:49.443 ********* 2026-03-24 03:04:26.548951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:04:26.548962 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:26.548974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:04:26.548984 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:04:26.548994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:04:26.549004 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:04:26.549055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:04:26.549067 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:04:26.549083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:04:26.549094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:04:26.549105 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:04:26.549114 | orchestrator | skipping: [testbed-node-4] 
2026-03-24 03:04:26.549124 | orchestrator | 2026-03-24 03:04:26.549135 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-24 03:04:26.549145 | orchestrator | Tuesday 24 March 2026 03:04:23 +0000 (0:00:01.782) 0:00:51.225 ********* 2026-03-24 03:04:26.549155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:04:26.549165 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:26.549182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:04:31.306615 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:04:31.306738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:04:31.306757 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:04:31.306769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:04:31.306781 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:04:31.306789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:04:31.306796 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:04:31.306827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:04:31.306849 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:04:31.306855 | orchestrator | 2026-03-24 
03:04:31.306862 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-24 03:04:31.306869 | orchestrator | Tuesday 24 March 2026 03:04:26 +0000 (0:00:02.659) 0:00:53.884 ********* 2026-03-24 03:04:31.306874 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:31.306880 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:04:31.306885 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:04:31.306891 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:04:31.306896 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:04:31.306901 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:04:31.306907 | orchestrator | 2026-03-24 03:04:31.306912 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-24 03:04:31.306918 | orchestrator | Tuesday 24 March 2026 03:04:28 +0000 (0:00:02.111) 0:00:55.996 ********* 2026-03-24 03:04:31.306923 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:31.306929 | orchestrator | 2026-03-24 03:04:31.306934 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-24 03:04:31.306955 | orchestrator | Tuesday 24 March 2026 03:04:28 +0000 (0:00:00.128) 0:00:56.125 ********* 2026-03-24 03:04:31.306960 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:31.306966 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:04:31.306971 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:04:31.306977 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:04:31.306982 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:04:31.306987 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:04:31.306993 | orchestrator | 2026-03-24 03:04:31.306998 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-24 03:04:31.307004 | orchestrator | Tuesday 24 March 2026 03:04:29 +0000 (0:00:00.550) 
0:00:56.675 ********* 2026-03-24 03:04:31.307014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:04:31.307020 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:04:31.307026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 
03:04:31.307036 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:04:31.307043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:04:31.307049 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:04:31.307054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:04:31.307060 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:04:31.307073 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:04:38.404421 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:04:38.404565 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:04:38.404599 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:04:38.404614 | orchestrator | 2026-03-24 03:04:38.404627 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-24 03:04:38.404640 | orchestrator | Tuesday 24 March 2026 03:04:31 +0000 (0:00:01.964) 0:00:58.640 ********* 2026-03-24 03:04:38.404653 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:38.404692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:38.404705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:38.404751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:38.404765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:38.404784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:38.404898 | orchestrator | 2026-03-24 03:04:38.404914 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-24 03:04:38.404928 | orchestrator | Tuesday 24 March 2026 03:04:33 +0000 (0:00:02.580) 0:01:01.221 ********* 2026-03-24 03:04:38.404941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:38.404956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:38.404989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:04:42.474000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:04:42.474167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 
03:04:42.474181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:04:42.474192 | orchestrator |
2026-03-24 03:04:42.474202 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-03-24 03:04:42.474213 | orchestrator | Tuesday 24 March 2026 03:04:38 +0000 (0:00:04.518) 0:01:05.740 *********
2026-03-24 03:04:42.474223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:04:42.474246 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:04:42.474273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:04:42.474290 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:04:42.474299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:04:42.474308 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:04:42.474317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:04:42.474326 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:42.474336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:04:42.474351 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:04:42.474373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:04:42.474389 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:04:42.474405 | orchestrator |
2026-03-24 03:04:42.474418 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-03-24 03:04:42.474434 | orchestrator | Tuesday 24 March 2026 03:04:40 +0000 (0:00:01.789) 0:01:07.529 *********
2026-03-24 03:04:42.474443 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:42.474451 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:04:42.474460 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:04:42.474468 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:04:42.474477 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:04:42.474486 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:04:42.474495 | orchestrator |
2026-03-24 03:04:42.474504 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-03-24 03:04:42.474519 | orchestrator | Tuesday 24 March 2026 03:04:42 +0000 (0:00:02.278) 0:01:09.808 *********
2026-03-24 03:04:59.017020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:04:59.017132 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:59.017149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:04:59.017159 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:04:59.017168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:04:59.017177 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:04:59.017187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:04:59.017252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:04:59.017264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:04:59.017273 | orchestrator |
2026-03-24 03:04:59.017283 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-03-24 03:04:59.017293 | orchestrator | Tuesday 24 March 2026 03:04:45 +0000 (0:00:03.100) 0:01:12.908 *********
2026-03-24 03:04:59.017302 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:04:59.017310 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:04:59.017318 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:04:59.017327 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:59.017336 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:04:59.017344 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:04:59.017352 | orchestrator |
2026-03-24 03:04:59.017361 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-03-24 03:04:59.017370 | orchestrator | Tuesday 24 March 2026 03:04:47 +0000 (0:00:02.151) 0:01:15.060 *********
2026-03-24 03:04:59.017378 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:04:59.017387 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:04:59.017395 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:04:59.017404 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:59.017413 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:04:59.017422 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:04:59.017430 | orchestrator |
2026-03-24 03:04:59.017439 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-03-24 03:04:59.017447 | orchestrator | Tuesday 24 March 2026 03:04:49 +0000 (0:00:02.159) 0:01:17.219 *********
2026-03-24 03:04:59.017456 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:04:59.017465 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:04:59.017473 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:04:59.017482 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:59.017491 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:04:59.017499 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:04:59.017508 | orchestrator |
2026-03-24 03:04:59.017516 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-03-24 03:04:59.017533 | orchestrator | Tuesday 24 March 2026 03:04:51 +0000 (0:00:02.077) 0:01:19.296 *********
2026-03-24 03:04:59.017541 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:04:59.017550 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:04:59.017559 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:04:59.017567 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:59.017576 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:04:59.017585 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:04:59.017594 | orchestrator |
2026-03-24 03:04:59.017603 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-03-24 03:04:59.017611 | orchestrator | Tuesday 24 March 2026 03:04:53 +0000 (0:00:02.027) 0:01:21.324 *********
2026-03-24 03:04:59.017620 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:04:59.017629 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:04:59.017637 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:04:59.017646 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:04:59.017654 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:04:59.017663 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:59.017672 | orchestrator |
2026-03-24 03:04:59.017680 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-03-24 03:04:59.017689 | orchestrator | Tuesday 24 March 2026 03:04:55 +0000 (0:00:01.718) 0:01:23.042 *********
2026-03-24 03:04:59.017697 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:04:59.017706 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:04:59.017715 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:04:59.017724 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:59.017735 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:04:59.017742 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:04:59.017748 | orchestrator |
2026-03-24 03:04:59.017754 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-03-24 03:04:59.017760 | orchestrator | Tuesday 24 March 2026 03:04:57 +0000 (0:00:01.659) 0:01:24.701 *********
2026-03-24 03:04:59.017798 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-24 03:04:59.017804 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:04:59.017810 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-24 03:04:59.017816 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:04:59.017823 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-24 03:04:59.017829 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:04:59.017835 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-24 03:04:59.017841 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:04:59.017854 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-24 03:05:02.504267 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:02.504368 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-03-24 03:05:02.504384 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:02.504396 | orchestrator |
2026-03-24 03:05:02.504408 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-03-24 03:05:02.504420 | orchestrator | Tuesday 24 March 2026 03:04:59 +0000 (0:00:01.645) 0:01:26.346 *********
2026-03-24 03:05:02.504435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:05:02.504474 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:02.504487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:05:02.504499 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:02.504510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:05:02.504521 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:02.504548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:05:02.504561 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:02.504601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:05:02.504646 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:02.504667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:05:02.504686 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:02.504705 | orchestrator |
2026-03-24 03:05:02.504725 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-03-24 03:05:02.504743 | orchestrator | Tuesday 24 March 2026 03:05:00 +0000 (0:00:01.810) 0:01:28.157 *********
2026-03-24 03:05:02.504791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:05:02.504813 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:02.504843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:05:02.504863 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:02.504898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-24 03:05:25.849362 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.849459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:05:25.849474 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.849482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:05:25.849494 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.849507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-24 03:05:25.849520 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.849532 | orchestrator |
2026-03-24 03:05:25.849545 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-03-24 03:05:25.849576 | orchestrator | Tuesday 24 March 2026 03:05:02 +0000 (0:00:01.684) 0:01:29.841 *********
2026-03-24 03:05:25.849588 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.849610 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.849622 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.849634 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.849647 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.849659 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.849671 | orchestrator |
2026-03-24 03:05:25.849699 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-03-24 03:05:25.849707 | orchestrator | Tuesday 24 March 2026 03:05:04 +0000 (0:00:01.683) 0:01:31.525 *********
2026-03-24 03:05:25.849715 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.849722 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.849794 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.849804 | orchestrator | changed: [testbed-node-3]
2026-03-24 03:05:25.849811 | orchestrator | changed: [testbed-node-5]
2026-03-24 03:05:25.849825 | orchestrator | changed: [testbed-node-4]
2026-03-24 03:05:25.849838 | orchestrator |
2026-03-24 03:05:25.849851 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-03-24 03:05:25.849887 | orchestrator | Tuesday 24 March 2026 03:05:07 +0000 (0:00:02.969) 0:01:34.494 *********
2026-03-24 03:05:25.849901 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.849914 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.849927 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.849941 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.849955 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.849968 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.849979 | orchestrator |
2026-03-24 03:05:25.849988 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-03-24 03:05:25.849997 | orchestrator | Tuesday 24 March 2026 03:05:09 +0000 (0:00:01.920) 0:01:36.415 *********
2026-03-24 03:05:25.850005 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.850059 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.850068 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.850077 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.850085 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.850093 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.850102 | orchestrator |
2026-03-24 03:05:25.850111 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-03-24 03:05:25.850163 | orchestrator | Tuesday 24 March 2026 03:05:10 +0000 (0:00:01.838) 0:01:38.253 *********
2026-03-24 03:05:25.850173 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.850180 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.850187 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.850194 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.850202 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.850209 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.850216 | orchestrator |
2026-03-24 03:05:25.850224 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-24 03:05:25.850231 | orchestrator | Tuesday 24 March 2026 03:05:12 +0000 (0:00:01.901) 0:01:40.155 *********
2026-03-24 03:05:25.850238 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.850245 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.850252 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.850260 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.850267 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.850285 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.850292 | orchestrator |
2026-03-24 03:05:25.850308 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-24 03:05:25.850315 | orchestrator | Tuesday 24 March 2026 03:05:14 +0000 (0:00:02.040) 0:01:42.195 *********
2026-03-24 03:05:25.850323 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.850330 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.850338 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.850345 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.850352 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.850359 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.850366 | orchestrator |
2026-03-24 03:05:25.850374 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-24 03:05:25.850381 | orchestrator | Tuesday 24 March 2026 03:05:17 +0000 (0:00:02.271) 0:01:44.467 *********
2026-03-24 03:05:25.850388 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.850396 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.850403 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.850410 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.850417 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.850424 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.850431 | orchestrator |
2026-03-24 03:05:25.850439 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-24 03:05:25.850446 | orchestrator | Tuesday 24 March 2026 03:05:19 +0000 (0:00:02.299) 0:01:46.766 *********
2026-03-24 03:05:25.850453 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.850469 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.850476 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.850483 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.850490 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.850498 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.850505 | orchestrator |
2026-03-24 03:05:25.850512 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-03-24 03:05:25.850520 | orchestrator | Tuesday 24 March 2026 03:05:21 +0000 (0:00:02.308) 0:01:49.075 *********
2026-03-24 03:05:25.850527 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-24 03:05:25.850536 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:05:25.850543 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-24 03:05:25.850550 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-24 03:05:25.850558 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:05:25.850565 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:05:25.850572 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-24 03:05:25.850580 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:05:25.850587 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-24 03:05:25.850595 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:05:25.850602 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-24 03:05:25.850615 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:05:25.850623 | orchestrator |
2026-03-24 03:05:25.850630 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-24 03:05:25.850637 | orchestrator | Tuesday 24 March 2026 03:05:23 +0000 (0:00:01.943) 0:01:51.019 *********
2026-03-24 03:05:25.850646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:05:25.850656 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:05:25.850670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:05:28.150339 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:05:28.150451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-24 03:05:28.150465 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:05:28.150475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:05:28.150483 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:05:28.150502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:05:28.150510 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:05:28.150518 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 03:05:28.150525 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:05:28.150533 | orchestrator | 2026-03-24 03:05:28.150542 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-24 03:05:28.150551 | orchestrator | Tuesday 24 March 2026 03:05:25 +0000 (0:00:02.155) 0:01:53.174 ********* 2026-03-24 03:05:28.150572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-03-24 03:05:28.150588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:05:28.150599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-24 03:05:28.150607 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:05:28.150615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:05:28.150633 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-24 03:07:41.141361 | orchestrator | 2026-03-24 03:07:41.141472 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-24 03:07:41.141485 | orchestrator | Tuesday 24 March 2026 03:05:28 +0000 (0:00:02.311) 0:01:55.486 ********* 2026-03-24 03:07:41.141493 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:07:41.141501 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:07:41.141509 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:07:41.141514 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:07:41.141518 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:07:41.141522 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:07:41.141527 | orchestrator | 2026-03-24 03:07:41.141532 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-24 03:07:41.141536 | orchestrator | Tuesday 24 March 2026 03:05:28 +0000 (0:00:00.628) 0:01:56.115 ********* 2026-03-24 03:07:41.141541 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:07:41.141545 | orchestrator | 2026-03-24 03:07:41.141550 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-24 03:07:41.141554 | orchestrator | Tuesday 24 March 2026 03:05:30 +0000 (0:00:02.139) 0:01:58.254 ********* 2026-03-24 03:07:41.141558 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:07:41.141562 | orchestrator | 2026-03-24 03:07:41.141566 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-24 03:07:41.141571 | orchestrator | Tuesday 24 March 2026 03:05:33 +0000 (0:00:02.231) 0:02:00.486 
********* 2026-03-24 03:07:41.141575 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:07:41.141579 | orchestrator | 2026-03-24 03:07:41.141583 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-24 03:07:41.141588 | orchestrator | Tuesday 24 March 2026 03:06:18 +0000 (0:00:45.576) 0:02:46.063 ********* 2026-03-24 03:07:41.141630 | orchestrator | 2026-03-24 03:07:41.141635 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-24 03:07:41.141640 | orchestrator | Tuesday 24 March 2026 03:06:18 +0000 (0:00:00.075) 0:02:46.138 ********* 2026-03-24 03:07:41.141644 | orchestrator | 2026-03-24 03:07:41.141648 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-24 03:07:41.141652 | orchestrator | Tuesday 24 March 2026 03:06:18 +0000 (0:00:00.079) 0:02:46.218 ********* 2026-03-24 03:07:41.141656 | orchestrator | 2026-03-24 03:07:41.141660 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-24 03:07:41.141665 | orchestrator | Tuesday 24 March 2026 03:06:18 +0000 (0:00:00.073) 0:02:46.291 ********* 2026-03-24 03:07:41.141669 | orchestrator | 2026-03-24 03:07:41.141685 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-24 03:07:41.141690 | orchestrator | Tuesday 24 March 2026 03:06:19 +0000 (0:00:00.069) 0:02:46.360 ********* 2026-03-24 03:07:41.141694 | orchestrator | 2026-03-24 03:07:41.141698 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-24 03:07:41.141702 | orchestrator | Tuesday 24 March 2026 03:06:19 +0000 (0:00:00.069) 0:02:46.430 ********* 2026-03-24 03:07:41.141706 | orchestrator | 2026-03-24 03:07:41.141710 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-24 03:07:41.141715 | 
orchestrator | Tuesday 24 March 2026 03:06:19 +0000 (0:00:00.068) 0:02:46.499 ********* 2026-03-24 03:07:41.141738 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:07:41.141742 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:07:41.141746 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:07:41.141750 | orchestrator | 2026-03-24 03:07:41.141754 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-24 03:07:41.141758 | orchestrator | Tuesday 24 March 2026 03:06:41 +0000 (0:00:22.496) 0:03:08.995 ********* 2026-03-24 03:07:41.141762 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:07:41.141767 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:07:41.141771 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:07:41.141775 | orchestrator | 2026-03-24 03:07:41.141779 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:07:41.141784 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-24 03:07:41.141790 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-24 03:07:41.141795 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-24 03:07:41.141799 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-24 03:07:41.141803 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-24 03:07:41.141807 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-24 03:07:41.141811 | orchestrator | 2026-03-24 03:07:41.141816 | orchestrator | 2026-03-24 03:07:41.141820 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 
03:07:41.141824 | orchestrator | Tuesday 24 March 2026 03:07:40 +0000 (0:00:59.079) 0:04:08.075 ********* 2026-03-24 03:07:41.141828 | orchestrator | =============================================================================== 2026-03-24 03:07:41.141832 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 59.08s 2026-03-24 03:07:41.141836 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.58s 2026-03-24 03:07:41.141840 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.50s 2026-03-24 03:07:41.141856 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.96s 2026-03-24 03:07:41.141861 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.62s 2026-03-24 03:07:41.141865 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 4.52s 2026-03-24 03:07:41.141869 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.21s 2026-03-24 03:07:41.141873 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.93s 2026-03-24 03:07:41.141877 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.26s 2026-03-24 03:07:41.141881 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.26s 2026-03-24 03:07:41.141886 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.24s 2026-03-24 03:07:41.141890 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.10s 2026-03-24 03:07:41.141894 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 2.97s 2026-03-24 03:07:41.141898 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.66s 2026-03-24 03:07:41.141902 | 
orchestrator | neutron : Copying over config.json files for services ------------------- 2.58s 2026-03-24 03:07:41.141906 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.50s 2026-03-24 03:07:41.141915 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.31s 2026-03-24 03:07:41.141920 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 2.31s 2026-03-24 03:07:41.141925 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 2.30s 2026-03-24 03:07:41.141930 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.28s 2026-03-24 03:07:43.406995 | orchestrator | 2026-03-24 03:07:43 | INFO  | Task b428721f-c258-46f5-ae90-dd29b110ded4 (nova) was prepared for execution. 2026-03-24 03:07:43.407120 | orchestrator | 2026-03-24 03:07:43 | INFO  | It takes a moment until task b428721f-c258-46f5-ae90-dd29b110ded4 (nova) has been started and output is visible here. 
2026-03-24 03:09:45.565514 | orchestrator | 2026-03-24 03:09:45.565706 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:09:45.565720 | orchestrator | 2026-03-24 03:09:45.565725 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-24 03:09:45.565731 | orchestrator | Tuesday 24 March 2026 03:07:47 +0000 (0:00:00.214) 0:00:00.214 ********* 2026-03-24 03:09:45.565736 | orchestrator | changed: [testbed-manager] 2026-03-24 03:09:45.565742 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.565748 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:09:45.565756 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:09:45.565763 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:09:45.565771 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:09:45.565778 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:09:45.565785 | orchestrator | 2026-03-24 03:09:45.565792 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:09:45.565799 | orchestrator | Tuesday 24 March 2026 03:07:48 +0000 (0:00:00.620) 0:00:00.834 ********* 2026-03-24 03:09:45.565807 | orchestrator | changed: [testbed-manager] 2026-03-24 03:09:45.565814 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.565821 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:09:45.565828 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:09:45.565835 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:09:45.565842 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:09:45.565850 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:09:45.565858 | orchestrator | 2026-03-24 03:09:45.565867 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:09:45.565875 | orchestrator | Tuesday 24 March 2026 03:07:48 +0000 (0:00:00.688) 0:00:01.522 
********* 2026-03-24 03:09:45.565883 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-24 03:09:45.565891 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-24 03:09:45.565898 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-24 03:09:45.565903 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-24 03:09:45.565908 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-24 03:09:45.565912 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-24 03:09:45.565917 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-24 03:09:45.565921 | orchestrator | 2026-03-24 03:09:45.565926 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-24 03:09:45.565931 | orchestrator | 2026-03-24 03:09:45.565938 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-24 03:09:45.565945 | orchestrator | Tuesday 24 March 2026 03:07:49 +0000 (0:00:00.669) 0:00:02.192 ********* 2026-03-24 03:09:45.565953 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:09:45.565960 | orchestrator | 2026-03-24 03:09:45.565967 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-24 03:09:45.565974 | orchestrator | Tuesday 24 March 2026 03:07:50 +0000 (0:00:00.604) 0:00:02.797 ********* 2026-03-24 03:09:45.565983 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-24 03:09:45.566011 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-24 03:09:45.566063 | orchestrator | 2026-03-24 03:09:45.566068 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-24 03:09:45.566074 | orchestrator | Tuesday 24 March 2026 03:07:54 +0000 (0:00:04.277) 0:00:07.074 
********* 2026-03-24 03:09:45.566121 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-24 03:09:45.566127 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-24 03:09:45.566132 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.566138 | orchestrator | 2026-03-24 03:09:45.566144 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-24 03:09:45.566149 | orchestrator | Tuesday 24 March 2026 03:07:58 +0000 (0:00:04.252) 0:00:11.326 ********* 2026-03-24 03:09:45.566154 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.566160 | orchestrator | 2026-03-24 03:09:45.566166 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-24 03:09:45.566171 | orchestrator | Tuesday 24 March 2026 03:07:59 +0000 (0:00:00.644) 0:00:11.971 ********* 2026-03-24 03:09:45.566177 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.566182 | orchestrator | 2026-03-24 03:09:45.566187 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-24 03:09:45.566192 | orchestrator | Tuesday 24 March 2026 03:08:00 +0000 (0:00:01.242) 0:00:13.214 ********* 2026-03-24 03:09:45.566197 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.566203 | orchestrator | 2026-03-24 03:09:45.566208 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-24 03:09:45.566213 | orchestrator | Tuesday 24 March 2026 03:08:02 +0000 (0:00:02.475) 0:00:15.689 ********* 2026-03-24 03:09:45.566219 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:09:45.566224 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566229 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566234 | orchestrator | 2026-03-24 03:09:45.566239 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-24 
03:09:45.566244 | orchestrator | Tuesday 24 March 2026 03:08:03 +0000 (0:00:00.291) 0:00:15.980 ********* 2026-03-24 03:09:45.566249 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:09:45.566255 | orchestrator | 2026-03-24 03:09:45.566260 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-24 03:09:45.566265 | orchestrator | Tuesday 24 March 2026 03:08:37 +0000 (0:00:33.907) 0:00:49.888 ********* 2026-03-24 03:09:45.566270 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.566276 | orchestrator | 2026-03-24 03:09:45.566281 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-24 03:09:45.566286 | orchestrator | Tuesday 24 March 2026 03:08:53 +0000 (0:00:15.885) 0:01:05.773 ********* 2026-03-24 03:09:45.566291 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:09:45.566296 | orchestrator | 2026-03-24 03:09:45.566302 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-24 03:09:45.566307 | orchestrator | Tuesday 24 March 2026 03:09:05 +0000 (0:00:12.721) 0:01:18.495 ********* 2026-03-24 03:09:45.566327 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:09:45.566333 | orchestrator | 2026-03-24 03:09:45.566344 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-24 03:09:45.566350 | orchestrator | Tuesday 24 March 2026 03:09:06 +0000 (0:00:00.645) 0:01:19.141 ********* 2026-03-24 03:09:45.566355 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:09:45.566361 | orchestrator | 2026-03-24 03:09:45.566365 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-24 03:09:45.566370 | orchestrator | Tuesday 24 March 2026 03:09:06 +0000 (0:00:00.459) 0:01:19.600 ********* 2026-03-24 03:09:45.566375 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-24 03:09:45.566380 | orchestrator | 2026-03-24 03:09:45.566384 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-24 03:09:45.566396 | orchestrator | Tuesday 24 March 2026 03:09:07 +0000 (0:00:00.649) 0:01:20.249 ********* 2026-03-24 03:09:45.566400 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:09:45.566405 | orchestrator | 2026-03-24 03:09:45.566429 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-24 03:09:45.566435 | orchestrator | Tuesday 24 March 2026 03:09:26 +0000 (0:00:19.218) 0:01:39.467 ********* 2026-03-24 03:09:45.566439 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:09:45.566444 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566448 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566453 | orchestrator | 2026-03-24 03:09:45.566457 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-24 03:09:45.566462 | orchestrator | 2026-03-24 03:09:45.566466 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-24 03:09:45.566471 | orchestrator | Tuesday 24 March 2026 03:09:27 +0000 (0:00:00.297) 0:01:39.764 ********* 2026-03-24 03:09:45.566475 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:09:45.566480 | orchestrator | 2026-03-24 03:09:45.566484 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-24 03:09:45.566489 | orchestrator | Tuesday 24 March 2026 03:09:27 +0000 (0:00:00.720) 0:01:40.485 ********* 2026-03-24 03:09:45.566493 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566498 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566502 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.566507 | orchestrator | 
2026-03-24 03:09:45.566511 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-24 03:09:45.566516 | orchestrator | Tuesday 24 March 2026 03:09:29 +0000 (0:00:02.076) 0:01:42.561 ********* 2026-03-24 03:09:45.566520 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566525 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566529 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.566556 | orchestrator | 2026-03-24 03:09:45.566564 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-24 03:09:45.566571 | orchestrator | Tuesday 24 March 2026 03:09:32 +0000 (0:00:02.223) 0:01:44.785 ********* 2026-03-24 03:09:45.566579 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:09:45.566585 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566590 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566595 | orchestrator | 2026-03-24 03:09:45.566599 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-24 03:09:45.566604 | orchestrator | Tuesday 24 March 2026 03:09:32 +0000 (0:00:00.480) 0:01:45.266 ********* 2026-03-24 03:09:45.566608 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-24 03:09:45.566613 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566617 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-24 03:09:45.566621 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566626 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-24 03:09:45.566631 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-24 03:09:45.566635 | orchestrator | 2026-03-24 03:09:45.566657 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-24 03:09:45.566666 | orchestrator | Tuesday 24 March 2026 03:09:40 +0000 
(0:00:07.783) 0:01:53.050 ********* 2026-03-24 03:09:45.566674 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:09:45.566681 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566688 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566695 | orchestrator | 2026-03-24 03:09:45.566701 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-24 03:09:45.566708 | orchestrator | Tuesday 24 March 2026 03:09:40 +0000 (0:00:00.318) 0:01:53.368 ********* 2026-03-24 03:09:45.566715 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-24 03:09:45.566722 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:09:45.566729 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-24 03:09:45.566743 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566751 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-24 03:09:45.566758 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566765 | orchestrator | 2026-03-24 03:09:45.566773 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-24 03:09:45.566780 | orchestrator | Tuesday 24 March 2026 03:09:41 +0000 (0:00:00.986) 0:01:54.355 ********* 2026-03-24 03:09:45.566788 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566796 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566803 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:09:45.566810 | orchestrator | 2026-03-24 03:09:45.566815 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-24 03:09:45.566819 | orchestrator | Tuesday 24 March 2026 03:09:42 +0000 (0:00:00.482) 0:01:54.838 ********* 2026-03-24 03:09:45.566824 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566828 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566833 | orchestrator | changed: 
[testbed-node-0] 2026-03-24 03:09:45.566859 | orchestrator | 2026-03-24 03:09:45.566864 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-24 03:09:45.566868 | orchestrator | Tuesday 24 March 2026 03:09:43 +0000 (0:00:01.004) 0:01:55.842 ********* 2026-03-24 03:09:45.566873 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:09:45.566877 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:09:45.566889 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:11:07.428416 | orchestrator | 2026-03-24 03:11:07.428527 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-24 03:11:07.428537 | orchestrator | Tuesday 24 March 2026 03:09:45 +0000 (0:00:02.433) 0:01:58.276 ********* 2026-03-24 03:11:07.428541 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:07.428547 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:07.428551 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:11:07.428556 | orchestrator | 2026-03-24 03:11:07.428560 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-24 03:11:07.428565 | orchestrator | Tuesday 24 March 2026 03:10:08 +0000 (0:00:22.615) 0:02:20.891 ********* 2026-03-24 03:11:07.428569 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:07.428573 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:07.428577 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:11:07.428581 | orchestrator | 2026-03-24 03:11:07.428586 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-24 03:11:07.428590 | orchestrator | Tuesday 24 March 2026 03:10:21 +0000 (0:00:13.039) 0:02:33.931 ********* 2026-03-24 03:11:07.428593 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:11:07.428597 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:07.428601 | orchestrator | skipping: [testbed-node-2] 
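The "Create cell" task that follows wraps `nova-manage cell_v2 create_cell`, which takes a message-queue transport URL and a database connection string. A sketch of how such values are typically composed, assuming RabbitMQ and MariaDB backends; the host names and credentials below are placeholders, not values from this deployment:

```python
def rabbit_transport_url(user: str, password: str, hosts: list[str], vhost: str) -> str:
    """Clustered RabbitMQ transport URL in oslo.messaging form."""
    nodes = ",".join(f"{user}:{password}@{h}:5672" for h in hosts)
    return f"rabbit://{nodes}/{vhost}"

def mysql_connection(user: str, password: str, host: str, database: str) -> str:
    """SQLAlchemy-style connection string as used by nova."""
    return f"mysql+pymysql://{user}:{password}@{host}:3306/{database}"

# Placeholder inputs for illustration only.
url = rabbit_transport_url("openstack", "secret", ["node0", "node1", "node2"], "nova")
conn = mysql_connection("nova", "secret", "db.internal", "nova")
```

The matching "Update cell" task is skipped above because the cell was just created with the current settings, so there is nothing to reconcile.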
2026-03-24 03:11:07.428605 | orchestrator | 2026-03-24 03:11:07.428609 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-24 03:11:07.428613 | orchestrator | Tuesday 24 March 2026 03:10:22 +0000 (0:00:01.023) 0:02:34.954 ********* 2026-03-24 03:11:07.428617 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:07.428622 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:07.428626 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:11:07.428630 | orchestrator | 2026-03-24 03:11:07.428634 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-24 03:11:07.428638 | orchestrator | Tuesday 24 March 2026 03:10:35 +0000 (0:00:13.274) 0:02:48.229 ********* 2026-03-24 03:11:07.428642 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:11:07.428645 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:07.428649 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:07.428653 | orchestrator | 2026-03-24 03:11:07.428657 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-24 03:11:07.428661 | orchestrator | Tuesday 24 March 2026 03:10:36 +0000 (0:00:01.002) 0:02:49.231 ********* 2026-03-24 03:11:07.428680 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:11:07.428684 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:07.428688 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:07.428692 | orchestrator | 2026-03-24 03:11:07.428696 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-24 03:11:07.428700 | orchestrator | 2026-03-24 03:11:07.428704 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-24 03:11:07.428708 | orchestrator | Tuesday 24 March 2026 03:10:36 +0000 (0:00:00.320) 0:02:49.551 ********* 2026-03-24 03:11:07.428740 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:11:07.428745 | orchestrator | 2026-03-24 03:11:07.428749 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-24 03:11:07.428753 | orchestrator | Tuesday 24 March 2026 03:10:37 +0000 (0:00:00.681) 0:02:50.233 ********* 2026-03-24 03:11:07.428758 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-24 03:11:07.428762 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-24 03:11:07.428766 | orchestrator | 2026-03-24 03:11:07.428770 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-24 03:11:07.428774 | orchestrator | Tuesday 24 March 2026 03:10:41 +0000 (0:00:03.559) 0:02:53.793 ********* 2026-03-24 03:11:07.428778 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-24 03:11:07.428784 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-24 03:11:07.428788 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-24 03:11:07.428792 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-24 03:11:07.428797 | orchestrator | 2026-03-24 03:11:07.428801 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-24 03:11:07.428805 | orchestrator | Tuesday 24 March 2026 03:10:47 +0000 (0:00:06.586) 0:03:00.379 ********* 2026-03-24 03:11:07.428809 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-24 03:11:07.428813 | orchestrator | 2026-03-24 03:11:07.428817 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-03-24 03:11:07.428820 | orchestrator | Tuesday 24 March 2026 03:10:50 +0000 (0:00:03.312) 0:03:03.692 ********* 2026-03-24 03:11:07.428824 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:11:07.428829 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-24 03:11:07.428833 | orchestrator | 2026-03-24 03:11:07.428837 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-24 03:11:07.428841 | orchestrator | Tuesday 24 March 2026 03:10:54 +0000 (0:00:04.020) 0:03:07.713 ********* 2026-03-24 03:11:07.428845 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-24 03:11:07.428849 | orchestrator | 2026-03-24 03:11:07.428853 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-24 03:11:07.428857 | orchestrator | Tuesday 24 March 2026 03:10:58 +0000 (0:00:03.192) 0:03:10.905 ********* 2026-03-24 03:11:07.428860 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-24 03:11:07.428864 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-24 03:11:07.428868 | orchestrator | 2026-03-24 03:11:07.428872 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-24 03:11:07.428889 | orchestrator | Tuesday 24 March 2026 03:11:06 +0000 (0:00:07.920) 0:03:18.826 ********* 2026-03-24 03:11:07.428897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:07.428909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:07.428914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:07.428926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-24 03:11:11.895977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:11.896123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:11.896139 | orchestrator | 2026-03-24 03:11:11.896152 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-24 03:11:11.896164 | orchestrator | Tuesday 24 March 2026 03:11:07 +0000 (0:00:01.318) 0:03:20.144 ********* 2026-03-24 03:11:11.896174 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:11:11.896188 | orchestrator | 2026-03-24 03:11:11.896205 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-24 03:11:11.896221 | orchestrator | Tuesday 24 March 2026 03:11:07 +0000 (0:00:00.121) 0:03:20.266 ********* 2026-03-24 03:11:11.896236 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:11:11.896251 | 
orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:11.896267 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:11.896285 | orchestrator | 2026-03-24 03:11:11.896301 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-24 03:11:11.896318 | orchestrator | Tuesday 24 March 2026 03:11:07 +0000 (0:00:00.312) 0:03:20.578 ********* 2026-03-24 03:11:11.896334 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 03:11:11.896350 | orchestrator | 2026-03-24 03:11:11.896367 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-24 03:11:11.896383 | orchestrator | Tuesday 24 March 2026 03:11:08 +0000 (0:00:00.689) 0:03:21.268 ********* 2026-03-24 03:11:11.896399 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:11:11.896414 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:11.896429 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:11.896444 | orchestrator | 2026-03-24 03:11:11.896460 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-24 03:11:11.896544 | orchestrator | Tuesday 24 March 2026 03:11:09 +0000 (0:00:00.490) 0:03:21.758 ********* 2026-03-24 03:11:11.896566 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:11:11.896583 | orchestrator | 2026-03-24 03:11:11.896602 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-24 03:11:11.896621 | orchestrator | Tuesday 24 March 2026 03:11:09 +0000 (0:00:00.536) 0:03:22.295 ********* 2026-03-24 03:11:11.896666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:11.896748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:11.896773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:11.896794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:11.896814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:11.896849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:11.896866 | orchestrator | 2026-03-24 03:11:11.896893 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-24 03:11:13.510266 | orchestrator | Tuesday 24 March 2026 03:11:11 +0000 (0:00:02.317) 0:03:24.613 ********* 2026-03-24 03:11:13.510347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-24 03:11:13.510361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:11:13.510371 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:11:13.510380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-24 03:11:13.510406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:11:13.510440 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:13.510487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-24 03:11:13.510498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:11:13.510505 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:13.510512 | orchestrator | 2026-03-24 03:11:13.510521 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-24 03:11:13.510529 | orchestrator | Tuesday 24 March 2026 03:11:12 +0000 (0:00:00.819) 0:03:25.433 
********* 2026-03-24 03:11:13.510536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-24 03:11:13.510551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:11:13.510556 | orchestrator | skipping: [testbed-node-0] 
2026-03-24 03:11:13.510571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-24 03:11:15.856066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:11:15.856171 | orchestrator | skipping: [testbed-node-1] 2026-03-24 
03:11:15.856192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-24 03:11:15.856235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:11:15.856248 | orchestrator | skipping: [testbed-node-2] 2026-03-24 
03:11:15.856260 | orchestrator | 2026-03-24 03:11:15.856272 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-24 03:11:15.856285 | orchestrator | Tuesday 24 March 2026 03:11:13 +0000 (0:00:00.795) 0:03:26.229 ********* 2026-03-24 03:11:15.856312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:15.856342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:15.856354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:15.856374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:15.856393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:15.856409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-24 03:11:21.854101 | orchestrator | 2026-03-24 03:11:21.854181 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-24 03:11:21.854191 | orchestrator | Tuesday 24 March 2026 03:11:15 +0000 (0:00:02.344) 0:03:28.573 ********* 2026-03-24 03:11:21.854201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:21.854229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:21.854247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:21.854269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:21.854282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:21.854303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:21.854314 | orchestrator | 2026-03-24 03:11:21.854323 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-24 03:11:21.854331 | orchestrator | Tuesday 24 March 2026 03:11:21 +0000 (0:00:05.456) 0:03:34.030 ********* 2026-03-24 03:11:21.854346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-24 03:11:21.854357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:11:21.854367 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:11:21.854387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-24 03:11:25.832048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:11:25.832139 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:25.832151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-24 03:11:25.832175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:11:25.832182 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:25.832188 | orchestrator | 2026-03-24 03:11:25.832195 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-24 03:11:25.832205 | orchestrator | Tuesday 24 March 2026 03:11:21 +0000 (0:00:00.542) 0:03:34.572 ********* 2026-03-24 03:11:25.832215 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:11:25.832227 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:11:25.832241 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:11:25.832252 | orchestrator | 2026-03-24 03:11:25.832262 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-24 03:11:25.832271 | orchestrator | Tuesday 24 March 2026 03:11:23 +0000 (0:00:01.467) 0:03:36.040 ********* 2026-03-24 03:11:25.832282 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:11:25.832291 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:11:25.832301 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:11:25.832310 | orchestrator | 2026-03-24 03:11:25.832319 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-24 03:11:25.832327 | orchestrator | Tuesday 24 March 2026 03:11:23 +0000 (0:00:00.287) 0:03:36.327 ********* 2026-03-24 03:11:25.832355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:25.832390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:25.832409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-24 03:11:25.832421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:25.832438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:11:25.832450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:04.005020 | orchestrator | 2026-03-24 03:12:04.005147 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-24 03:12:04.005163 | orchestrator | Tuesday 24 March 2026 03:11:25 +0000 (0:00:01.835) 0:03:38.162 ********* 2026-03-24 03:12:04.005173 | orchestrator | 2026-03-24 03:12:04.005184 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-24 03:12:04.005194 | orchestrator | Tuesday 24 March 2026 03:11:25 +0000 (0:00:00.128) 0:03:38.290 ********* 2026-03-24 
03:12:04.005203 | orchestrator | 2026-03-24 03:12:04.005213 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-24 03:12:04.005223 | orchestrator | Tuesday 24 March 2026 03:11:25 +0000 (0:00:00.125) 0:03:38.416 ********* 2026-03-24 03:12:04.005233 | orchestrator | 2026-03-24 03:12:04.005242 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-24 03:12:04.005252 | orchestrator | Tuesday 24 March 2026 03:11:25 +0000 (0:00:00.129) 0:03:38.545 ********* 2026-03-24 03:12:04.005262 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:12:04.005272 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:12:04.005282 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:12:04.005291 | orchestrator | 2026-03-24 03:12:04.005301 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-24 03:12:04.005311 | orchestrator | Tuesday 24 March 2026 03:11:41 +0000 (0:00:16.050) 0:03:54.596 ********* 2026-03-24 03:12:04.005321 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:12:04.005330 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:12:04.005340 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:12:04.005349 | orchestrator | 2026-03-24 03:12:04.005359 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-24 03:12:04.005369 | orchestrator | 2026-03-24 03:12:04.005378 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-24 03:12:04.005388 | orchestrator | Tuesday 24 March 2026 03:11:51 +0000 (0:00:10.105) 0:04:04.701 ********* 2026-03-24 03:12:04.005399 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:12:04.005410 | orchestrator | 2026-03-24 03:12:04.005420 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-24 03:12:04.005549 | orchestrator | Tuesday 24 March 2026 03:11:53 +0000 (0:00:01.134) 0:04:05.836 ********* 2026-03-24 03:12:04.005578 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:12:04.005595 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:12:04.005613 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:12:04.005649 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:04.005662 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:04.005673 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:12:04.005685 | orchestrator | 2026-03-24 03:12:04.005696 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-24 03:12:04.005708 | orchestrator | Tuesday 24 March 2026 03:11:53 +0000 (0:00:00.697) 0:04:06.533 ********* 2026-03-24 03:12:04.005720 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:04.005731 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:04.005743 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:12:04.005755 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 03:12:04.005767 | orchestrator | 2026-03-24 03:12:04.005779 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-24 03:12:04.005790 | orchestrator | Tuesday 24 March 2026 03:11:54 +0000 (0:00:00.834) 0:04:07.368 ********* 2026-03-24 03:12:04.005803 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-24 03:12:04.005817 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-24 03:12:04.005834 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-24 03:12:04.005860 | orchestrator | 2026-03-24 03:12:04.005878 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-24 
03:12:04.005894 | orchestrator | Tuesday 24 March 2026 03:11:55 +0000 (0:00:00.852) 0:04:08.221 ********* 2026-03-24 03:12:04.005910 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-24 03:12:04.005926 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-24 03:12:04.005942 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-24 03:12:04.005955 | orchestrator | 2026-03-24 03:12:04.005970 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-24 03:12:04.006088 | orchestrator | Tuesday 24 March 2026 03:11:56 +0000 (0:00:01.182) 0:04:09.403 ********* 2026-03-24 03:12:04.006101 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-24 03:12:04.006111 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:12:04.006121 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-24 03:12:04.006130 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:12:04.006150 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-24 03:12:04.006160 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:12:04.006170 | orchestrator | 2026-03-24 03:12:04.006180 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-24 03:12:04.006189 | orchestrator | Tuesday 24 March 2026 03:11:57 +0000 (0:00:00.545) 0:04:09.949 ********* 2026-03-24 03:12:04.006199 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 03:12:04.006209 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-24 03:12:04.006219 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 03:12:04.006229 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:04.006239 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  
2026-03-24 03:12:04.006248 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 03:12:04.006258 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:04.006268 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 03:12:04.006298 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 03:12:04.006309 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:12:04.006319 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-24 03:12:04.006328 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-24 03:12:04.006338 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-24 03:12:04.006367 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-24 03:12:04.006383 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-24 03:12:04.006401 | orchestrator | 2026-03-24 03:12:04.006418 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-24 03:12:04.006435 | orchestrator | Tuesday 24 March 2026 03:11:59 +0000 (0:00:02.002) 0:04:11.952 ********* 2026-03-24 03:12:04.006568 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:04.006588 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:04.006604 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:12:04.006619 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:12:04.006634 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:12:04.006651 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:12:04.006667 | orchestrator | 2026-03-24 03:12:04.006683 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-24 03:12:04.006699 | orchestrator | 
Tuesday 24 March 2026 03:12:00 +0000 (0:00:01.141) 0:04:13.094 ********* 2026-03-24 03:12:04.006714 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:04.006729 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:04.006746 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:12:04.006763 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:12:04.006779 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:12:04.006795 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:12:04.006812 | orchestrator | 2026-03-24 03:12:04.006827 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-24 03:12:04.006845 | orchestrator | Tuesday 24 March 2026 03:12:02 +0000 (0:00:01.859) 0:04:14.953 ********* 2026-03-24 03:12:04.006878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:12:04.006905 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:12:04.006939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610406 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:05.610702 | orchestrator | 2026-03-24 03:12:05.610717 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-24 03:12:05.610731 | orchestrator | Tuesday 
24 March 2026 03:12:04 +0000 (0:00:02.187) 0:04:17.140 ********* 2026-03-24 03:12:05.610745 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:12:05.610759 | orchestrator | 2026-03-24 03:12:05.610772 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-24 03:12:05.610793 | orchestrator | Tuesday 24 March 2026 03:12:05 +0000 (0:00:01.189) 0:04:18.330 ********* 2026-03-24 03:12:09.067119 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067218 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-24 
03:12:09.067244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067274 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:09.067302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:10.690920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:10.691039 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-24 03:12:10.691057 | orchestrator | 2026-03-24 03:12:10.691068 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-24 03:12:10.691077 | orchestrator | Tuesday 24 March 2026 03:12:09 +0000 (0:00:03.801) 0:04:22.131 ********* 2026-03-24 03:12:10.691088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-24 03:12:10.691118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-24 03:12:10.691127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-24 03:12:10.691133 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:12:10.691161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-24 03:12:10.691170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-24 03:12:10.691178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-24 03:12:10.691194 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:12:10.691202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-24 03:12:10.691210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-24 03:12:10.691226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-24 03:12:13.022122 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:12:13.022328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-24 03:12:13.022356 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:12:13.022396 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:13.022409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-24 03:12:13.022423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:12:13.022437 | orchestrator | skipping: [testbed-node-2] 2026-03-24 
03:12:13.022517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-24 03:12:13.022531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:12:13.022546 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:13.022561 | orchestrator | 2026-03-24 03:12:13.022575 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-24 03:12:13.022591 | orchestrator | Tuesday 24 March 2026 03:12:10 +0000 (0:00:01.480) 0:04:23.612 ********* 2026-03-24 03:12:13.022640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-24 03:12:13.022670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-24 03:12:13.022687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-03-24 03:12:13.022702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-24 03:12:13.022716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-24 03:12:13.022741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-24 03:12:19.364231 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:12:19.364432 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:12:19.364513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-24 03:12:19.364551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-24 03:12:19.364570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-24 03:12:19.364588 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:12:19.364607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-24 03:12:19.364626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:12:19.364643 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:19.364708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-24 03:12:19.364741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:12:19.364760 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:19.364779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-24 03:12:19.364797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:12:19.364815 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:12:19.364833 | orchestrator | 2026-03-24 03:12:19.364853 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-24 03:12:19.364872 | orchestrator | Tuesday 24 March 2026 03:12:13 +0000 (0:00:02.126) 0:04:25.738 ********* 2026-03-24 03:12:19.364885 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:19.364902 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:19.364920 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:12:19.364938 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 03:12:19.364955 | orchestrator | 2026-03-24 03:12:19.364970 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-24 
03:12:19.364980 | orchestrator | Tuesday 24 March 2026 03:12:13 +0000 (0:00:00.879) 0:04:26.618 ********* 2026-03-24 03:12:19.364995 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-24 03:12:19.365012 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-24 03:12:19.365028 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-24 03:12:19.365045 | orchestrator | 2026-03-24 03:12:19.365059 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-24 03:12:19.365069 | orchestrator | Tuesday 24 March 2026 03:12:14 +0000 (0:00:01.015) 0:04:27.633 ********* 2026-03-24 03:12:19.365079 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-24 03:12:19.365088 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-24 03:12:19.365098 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-24 03:12:19.365107 | orchestrator | 2026-03-24 03:12:19.365117 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-24 03:12:19.365126 | orchestrator | Tuesday 24 March 2026 03:12:15 +0000 (0:00:00.878) 0:04:28.512 ********* 2026-03-24 03:12:19.365143 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:12:19.365154 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:12:19.365165 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:12:19.365180 | orchestrator | 2026-03-24 03:12:19.365194 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-24 03:12:19.365204 | orchestrator | Tuesday 24 March 2026 03:12:16 +0000 (0:00:00.511) 0:04:29.023 ********* 2026-03-24 03:12:19.365214 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:12:19.365224 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:12:19.365233 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:12:19.365243 | orchestrator | 2026-03-24 03:12:19.365252 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-03-24 03:12:19.365265 | orchestrator | Tuesday 24 March 2026 03:12:16 +0000 (0:00:00.479) 0:04:29.503 ********* 2026-03-24 03:12:19.365282 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-24 03:12:19.365299 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-24 03:12:19.365315 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-24 03:12:19.365332 | orchestrator | 2026-03-24 03:12:19.365348 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-24 03:12:19.365364 | orchestrator | Tuesday 24 March 2026 03:12:18 +0000 (0:00:01.339) 0:04:30.842 ********* 2026-03-24 03:12:19.365400 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-24 03:12:36.886425 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-24 03:12:36.886622 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-24 03:12:36.886652 | orchestrator | 2026-03-24 03:12:36.886672 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-24 03:12:36.886692 | orchestrator | Tuesday 24 March 2026 03:12:19 +0000 (0:00:01.239) 0:04:32.082 ********* 2026-03-24 03:12:36.886708 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-24 03:12:36.886727 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-24 03:12:36.886745 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-24 03:12:36.886762 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-24 03:12:36.886781 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-24 03:12:36.886799 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-24 03:12:36.886819 | orchestrator | 2026-03-24 03:12:36.886838 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-24 
03:12:36.886857 | orchestrator | Tuesday 24 March 2026 03:12:23 +0000 (0:00:03.659) 0:04:35.741 ********* 2026-03-24 03:12:36.886868 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:12:36.886880 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:12:36.886891 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:12:36.886902 | orchestrator | 2026-03-24 03:12:36.886913 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-24 03:12:36.886924 | orchestrator | Tuesday 24 March 2026 03:12:23 +0000 (0:00:00.295) 0:04:36.037 ********* 2026-03-24 03:12:36.886935 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:12:36.886950 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:12:36.886969 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:12:36.886986 | orchestrator | 2026-03-24 03:12:36.887005 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-24 03:12:36.887022 | orchestrator | Tuesday 24 March 2026 03:12:23 +0000 (0:00:00.455) 0:04:36.493 ********* 2026-03-24 03:12:36.887039 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:12:36.887058 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:12:36.887076 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:12:36.887096 | orchestrator | 2026-03-24 03:12:36.887115 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-24 03:12:36.887133 | orchestrator | Tuesday 24 March 2026 03:12:24 +0000 (0:00:01.226) 0:04:37.720 ********* 2026-03-24 03:12:36.887153 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-24 03:12:36.887193 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-24 03:12:36.887205 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-24 03:12:36.887217 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-24 03:12:36.887229 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-24 03:12:36.887240 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-24 03:12:36.887250 | orchestrator | 2026-03-24 03:12:36.887261 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-24 03:12:36.887273 | orchestrator | Tuesday 24 March 2026 03:12:28 +0000 (0:00:03.174) 0:04:40.894 ********* 2026-03-24 03:12:36.887284 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-24 03:12:36.887295 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-24 03:12:36.887305 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-24 03:12:36.887316 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-24 03:12:36.887327 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:12:36.887337 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-24 03:12:36.887348 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:12:36.887359 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-24 03:12:36.887369 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:12:36.887380 | orchestrator | 2026-03-24 03:12:36.887391 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-24 03:12:36.887401 | orchestrator | Tuesday 24 March 2026 03:12:31 +0000 (0:00:03.185) 0:04:44.079 ********* 2026-03-24 03:12:36.887412 | 
orchestrator | skipping: [testbed-node-3] 2026-03-24 03:12:36.887423 | orchestrator | 2026-03-24 03:12:36.887463 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-24 03:12:36.887476 | orchestrator | Tuesday 24 March 2026 03:12:31 +0000 (0:00:00.125) 0:04:44.205 ********* 2026-03-24 03:12:36.887487 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:12:36.887497 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:12:36.887508 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:12:36.887519 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:36.887529 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:36.887539 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:12:36.887550 | orchestrator | 2026-03-24 03:12:36.887561 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-24 03:12:36.887572 | orchestrator | Tuesday 24 March 2026 03:12:32 +0000 (0:00:00.744) 0:04:44.949 ********* 2026-03-24 03:12:36.887582 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-24 03:12:36.887593 | orchestrator | 2026-03-24 03:12:36.887603 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-24 03:12:36.887614 | orchestrator | Tuesday 24 March 2026 03:12:32 +0000 (0:00:00.629) 0:04:45.579 ********* 2026-03-24 03:12:36.887641 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:12:36.887674 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:12:36.887686 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:12:36.887697 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:12:36.887707 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:12:36.887718 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:12:36.887729 | orchestrator | 2026-03-24 03:12:36.887740 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-03-24 03:12:36.887751 | orchestrator | Tuesday 24 March 2026 03:12:33 +0000 (0:00:00.710) 0:04:46.290 ********* 2026-03-24 03:12:36.887778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:12:36.887803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:12:36.887823 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-24 03:12:36.887842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-24 03:12:36.887882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-24 03:12:42.880529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-24 03:12:42.880618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-24 03:12:42.880632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-24 03:12:42.880641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-24 03:12:42.880650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 03:12:42.880659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 03:12:42.880691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 03:12:42.880713 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-24 03:12:42.880719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-24 03:12:42.880724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-24 03:12:42.880729 | orchestrator |
2026-03-24 03:12:42.880736 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-03-24 03:12:42.880746 | orchestrator | Tuesday 24 March 2026 03:12:37 +0000 (0:00:03.569) 0:04:49.859 *********
2026-03-24 03:12:42.880754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-24 03:12:42.880764 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-24 03:12:42.880778 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-24 03:12:43.118949 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-24 03:12:43.119017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-24 03:12:43.119024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-24 03:12:43.119029 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-24 03:12:43.119061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-24 03:12:43.119075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-24 03:12:43.119081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-24 03:12:43.119085 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-24 03:12:43.119089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-24 03:12:43.119093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 03:13:03.119105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 03:12:43.119109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 03:12:43.119114 | orchestrator |
2026-03-24 03:12:43.119119 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-03-24 03:12:43.119128 | orchestrator | Tuesday 24 March 2026 03:12:43 +0000 (0:00:05.979) 0:04:55.839 *********
2026-03-24 03:13:03.072801 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:13:03.072889 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:13:03.072897 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:13:03.072903 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:13:03.072908 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:13:03.072914 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:13:03.072919 | orchestrator |
2026-03-24 03:13:03.072926 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-03-24 03:13:03.072933 | orchestrator | Tuesday 24 March 2026 03:12:44 +0000 (0:00:01.196) 0:04:57.036 *********
2026-03-24 03:13:03.072938 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-24 03:13:03.072944 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-24 03:13:03.072949 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-24 03:13:03.072955 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-24 03:13:03.072960 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-24 03:13:03.072967 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-24 03:13:03.072972 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:13:03.072977 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:13:03.072982 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-24 03:13:03.072987 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-03-24 03:13:03.072992 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-24 03:13:03.072997 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:13:03.073003 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-24 03:13:03.073008 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-24 03:13:03.073031 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-03-24 03:13:03.073037 | orchestrator |
2026-03-24 03:13:03.073043 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-03-24 03:13:03.073048 | orchestrator | Tuesday 24 March 2026 03:12:47 +0000 (0:00:03.297) 0:05:00.334 *********
2026-03-24 03:13:03.073053 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:13:03.073058 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:13:03.073063 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:13:03.073069 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:13:03.073074 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:13:03.073079 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:13:03.073084 | orchestrator |
2026-03-24 03:13:03.073089 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-03-24 03:13:03.073094 | orchestrator | Tuesday 24 March 2026 03:12:48 +0000 (0:00:00.571) 0:05:00.905 *********
2026-03-24 03:13:03.073099 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-24 03:13:03.073105 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-24 03:13:03.073110 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-24 03:13:03.073115 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-24 03:13:03.073120 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-24 03:13:03.073125 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-24 03:13:03.073141 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073147 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073152 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073157 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073162 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:13:03.073167 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073172 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:13:03.073177 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073182 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:13:03.073187 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073193 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073210 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073215 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073220 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073225 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-24 03:13:03.073230 | orchestrator |
2026-03-24 03:13:03.073236 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-24 03:13:03.073241 | orchestrator | Tuesday 24 March 2026 03:12:53 +0000 (0:00:05.332) 0:05:06.238 *********
2026-03-24 03:13:03.073251 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-24 03:13:03.073256 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-24 03:13:03.073261 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-24 03:13:03.073268 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-24 03:13:03.073277 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-24 03:13:03.073285 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-24 03:13:03.073295 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-24 03:13:03.073303 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-24 03:13:03.073311 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-24 03:13:03.073319 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-24 03:13:03.073328 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-24 03:13:03.073336 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-24 03:13:03.073344 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-24 03:13:03.073353 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:13:03.073361 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-24 03:13:03.073371 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:13:03.073380 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-24 03:13:03.073390 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-24 03:13:03.073399 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:13:03.073409 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-24 03:13:03.073459 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-24 03:13:03.073471 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-24 03:13:03.073480 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-24 03:13:03.073489 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-24 03:13:03.073499 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-24 03:13:03.073507 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-24 03:13:03.073516 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-24 03:13:03.073524 | orchestrator |
2026-03-24 03:13:03.073536 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-24 03:13:03.073542 | orchestrator | Tuesday 24 March 2026 03:12:59 +0000 (0:00:06.471) 0:05:12.709 *********
2026-03-24 03:13:03.073549 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:13:03.073555 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:13:03.073561 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:13:03.073567 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:13:03.073574 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:13:03.073580 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:13:03.073586 | orchestrator |
2026-03-24 03:13:03.073591 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-24 03:13:03.073596 | orchestrator | Tuesday 24 March 2026 03:13:00 +0000 (0:00:00.518) 0:05:13.327 *********
2026-03-24 03:13:03.073602 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:13:03.073612 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:13:03.073617 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:13:03.073622 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:13:03.073627 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:13:03.073632 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:13:03.073638 | orchestrator |
2026-03-24 03:13:03.073643 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-24 03:13:03.073648 | orchestrator | Tuesday 24 March 2026 03:13:01 +0000 (0:00:00.518) 0:05:13.846 *********
2026-03-24 03:13:03.073653 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:13:03.073658 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:13:03.073664 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:13:03.073669 | orchestrator | changed: [testbed-node-3]
2026-03-24 03:13:03.073674 | orchestrator | changed: [testbed-node-5]
2026-03-24 03:13:03.073679 | orchestrator | changed: [testbed-node-4]
2026-03-24 03:13:03.073684 | orchestrator |
2026-03-24 03:13:03.073696 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-24 03:13:04.197478 | orchestrator | Tuesday 24 March 2026 03:13:03 +0000 (0:00:01.935) 0:05:15.781 *********
2026-03-24 03:13:04.197575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-24 03:13:04.197592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-24 03:13:04.197605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-24 03:13:04.197616 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:13:04.197644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-24 03:13:04.197674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-24 03:13:04.197702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-24 03:13:04.197713 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:13:04.197723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-24 03:13:04.197733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-24 03:13:04.197744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-24 03:13:04.197795 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:13:04.197807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-24 03:13:04.197825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 03:13:07.388396 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:13:07.388518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group':
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-24 03:13:07.388528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:13:07.388532 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:13:07.388536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-24 03:13:07.388541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:13:07.388560 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:13:07.388565 | orchestrator | 2026-03-24 03:13:07.388569 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-24 03:13:07.388574 | orchestrator | Tuesday 24 March 2026 03:13:04 +0000 (0:00:01.213) 0:05:16.994 ********* 2026-03-24 03:13:07.388579 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-24 03:13:07.388583 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-24 03:13:07.388597 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:13:07.388601 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-24 03:13:07.388605 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-24 03:13:07.388608 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:13:07.388612 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-24 03:13:07.388616 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-24 03:13:07.388620 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:13:07.388623 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-24 03:13:07.388627 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-24 03:13:07.388631 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:13:07.388634 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-03-24 03:13:07.388638 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-24 03:13:07.388642 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:13:07.388646 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-24 03:13:07.388649 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-24 03:13:07.388653 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:13:07.388657 | orchestrator | 2026-03-24 03:13:07.388661 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-24 03:13:07.388665 | orchestrator | Tuesday 24 March 2026 03:13:05 +0000 (0:00:00.752) 0:05:17.746 ********* 2026-03-24 03:13:07.388679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:13:07.388685 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:13:07.388693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-24 03:13:07.388700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-24 03:13:07.388704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-24 03:13:07.388713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369829 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-24 03:13:57.369916 | orchestrator | 2026-03-24 03:13:57.369926 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-24 03:13:57.369935 | orchestrator | Tuesday 24 March 2026 03:13:07 +0000 (0:00:02.579) 0:05:20.326 ********* 2026-03-24 
03:13:57.369944 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:13:57.369952 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:13:57.369960 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:13:57.369968 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:13:57.369976 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:13:57.369999 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:13:57.370071 | orchestrator | 2026-03-24 03:13:57.370082 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-24 03:13:57.370091 | orchestrator | Tuesday 24 March 2026 03:13:08 +0000 (0:00:00.791) 0:05:21.117 ********* 2026-03-24 03:13:57.370100 | orchestrator | 2026-03-24 03:13:57.370109 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-24 03:13:57.370118 | orchestrator | Tuesday 24 March 2026 03:13:08 +0000 (0:00:00.134) 0:05:21.252 ********* 2026-03-24 03:13:57.370127 | orchestrator | 2026-03-24 03:13:57.370137 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-24 03:13:57.370151 | orchestrator | Tuesday 24 March 2026 03:13:08 +0000 (0:00:00.131) 0:05:21.384 ********* 2026-03-24 03:13:57.370161 | orchestrator | 2026-03-24 03:13:57.370171 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-24 03:13:57.370181 | orchestrator | Tuesday 24 March 2026 03:13:08 +0000 (0:00:00.133) 0:05:21.517 ********* 2026-03-24 03:13:57.370190 | orchestrator | 2026-03-24 03:13:57.370199 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-03-24 03:13:57.370208 | orchestrator | Tuesday 24 March 2026 03:13:08 +0000 (0:00:00.133) 0:05:21.651 ********* 2026-03-24 03:13:57.370217 | orchestrator | 2026-03-24 03:13:57.370227 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-03-24 03:13:57.370236 | orchestrator | Tuesday 24 March 2026 03:13:09 +0000 (0:00:00.281) 0:05:21.932 ********* 2026-03-24 03:13:57.370245 | orchestrator | 2026-03-24 03:13:57.370254 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-03-24 03:13:57.370263 | orchestrator | Tuesday 24 March 2026 03:13:09 +0000 (0:00:00.133) 0:05:22.065 ********* 2026-03-24 03:13:57.370271 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:13:57.370280 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:13:57.370289 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:13:57.370298 | orchestrator | 2026-03-24 03:13:57.370307 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-03-24 03:13:57.370317 | orchestrator | Tuesday 24 March 2026 03:13:15 +0000 (0:00:06.343) 0:05:28.409 ********* 2026-03-24 03:13:57.370326 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:13:57.370335 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:13:57.370344 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:13:57.370354 | orchestrator | 2026-03-24 03:13:57.370363 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-03-24 03:13:57.370379 | orchestrator | Tuesday 24 March 2026 03:13:32 +0000 (0:00:17.132) 0:05:45.541 ********* 2026-03-24 03:13:57.370388 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:13:57.370418 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:13:57.370431 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:13:57.370444 | orchestrator | 2026-03-24 03:13:57.370466 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-03-24 03:16:11.325222 | orchestrator | Tuesday 24 March 2026 03:13:57 +0000 (0:00:24.537) 0:06:10.079 ********* 2026-03-24 03:16:11.325316 | orchestrator | changed: 
[testbed-node-3] 2026-03-24 03:16:11.325328 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:16:11.325335 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:16:11.325368 | orchestrator | 2026-03-24 03:16:11.325376 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-03-24 03:16:11.325383 | orchestrator | Tuesday 24 March 2026 03:14:35 +0000 (0:00:38.078) 0:06:48.158 ********* 2026-03-24 03:16:11.325390 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:16:11.325396 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:16:11.325402 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:16:11.325409 | orchestrator | 2026-03-24 03:16:11.325415 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-03-24 03:16:11.325422 | orchestrator | Tuesday 24 March 2026 03:14:36 +0000 (0:00:00.773) 0:06:48.931 ********* 2026-03-24 03:16:11.325428 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:16:11.325435 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:16:11.325441 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:16:11.325447 | orchestrator | 2026-03-24 03:16:11.325453 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-24 03:16:11.325460 | orchestrator | Tuesday 24 March 2026 03:14:36 +0000 (0:00:00.756) 0:06:49.688 ********* 2026-03-24 03:16:11.325470 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:16:11.325480 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:16:11.325505 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:16:11.325525 | orchestrator | 2026-03-24 03:16:11.325537 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-24 03:16:11.325549 | orchestrator | Tuesday 24 March 2026 03:15:04 +0000 (0:00:27.404) 0:07:17.092 ********* 2026-03-24 03:16:11.325559 | orchestrator | skipping: 
[testbed-node-3] 2026-03-24 03:16:11.325570 | orchestrator | 2026-03-24 03:16:11.325581 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-24 03:16:11.325592 | orchestrator | Tuesday 24 March 2026 03:15:04 +0000 (0:00:00.124) 0:07:17.216 ********* 2026-03-24 03:16:11.325603 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:16:11.325614 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:16:11.325624 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:16:11.325638 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:16:11.325670 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:16:11.325681 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-03-24 03:16:11.325692 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 03:16:11.325711 | orchestrator | 2026-03-24 03:16:11.325720 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-24 03:16:11.325729 | orchestrator | Tuesday 24 March 2026 03:15:26 +0000 (0:00:21.893) 0:07:39.109 ********* 2026-03-24 03:16:11.325738 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:16:11.325747 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:16:11.325757 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:16:11.325767 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:16:11.325777 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:16:11.325787 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:16:11.325798 | orchestrator | 2026-03-24 03:16:11.325809 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-24 03:16:11.325855 | orchestrator | Tuesday 24 March 2026 03:15:34 +0000 (0:00:07.846) 0:07:46.956 ********* 2026-03-24 03:16:11.325864 | orchestrator | skipping: [testbed-node-4] 
2026-03-24 03:16:11.325872 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:16:11.325879 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:16:11.325886 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:16:11.325894 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:16:11.325902 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-03-24 03:16:11.325909 | orchestrator | 2026-03-24 03:16:11.325930 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-24 03:16:11.325938 | orchestrator | Tuesday 24 March 2026 03:15:37 +0000 (0:00:03.394) 0:07:50.351 ********* 2026-03-24 03:16:11.325946 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 03:16:11.325953 | orchestrator | 2026-03-24 03:16:11.325960 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-24 03:16:11.325967 | orchestrator | Tuesday 24 March 2026 03:15:51 +0000 (0:00:13.843) 0:08:04.195 ********* 2026-03-24 03:16:11.325975 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 03:16:11.325982 | orchestrator | 2026-03-24 03:16:11.325989 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-24 03:16:11.325996 | orchestrator | Tuesday 24 March 2026 03:15:52 +0000 (0:00:01.440) 0:08:05.635 ********* 2026-03-24 03:16:11.326003 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:16:11.326010 | orchestrator | 2026-03-24 03:16:11.326062 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-24 03:16:11.326069 | orchestrator | Tuesday 24 March 2026 03:15:54 +0000 (0:00:01.546) 0:08:07.181 ********* 2026-03-24 03:16:11.326076 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 03:16:11.326082 | orchestrator | 2026-03-24 03:16:11.326088 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-24 03:16:11.326094 | orchestrator | Tuesday 24 March 2026 03:16:06 +0000 (0:00:11.768) 0:08:18.950 *********
2026-03-24 03:16:11.326101 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:16:11.326108 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:16:11.326114 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:16:11.326120 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:11.326127 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:11.326133 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:11.326139 | orchestrator |
2026-03-24 03:16:11.326145 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-24 03:16:11.326151 | orchestrator |
2026-03-24 03:16:11.326158 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-24 03:16:11.326180 | orchestrator | Tuesday 24 March 2026 03:16:07 +0000 (0:00:01.701) 0:08:20.652 *********
2026-03-24 03:16:11.326187 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:16:11.326193 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:16:11.326199 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:16:11.326205 | orchestrator |
2026-03-24 03:16:11.326212 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-24 03:16:11.326218 | orchestrator |
2026-03-24 03:16:11.326224 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-24 03:16:11.326230 | orchestrator | Tuesday 24 March 2026 03:16:08 +0000 (0:00:00.938) 0:08:21.590 *********
2026-03-24 03:16:11.326236 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:11.326242 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:11.326249 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:11.326255 | orchestrator |
2026-03-24 03:16:11.326261 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-24 03:16:11.326267 | orchestrator |
2026-03-24 03:16:11.326273 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-24 03:16:11.326280 | orchestrator | Tuesday 24 March 2026 03:16:09 +0000 (0:00:00.667) 0:08:22.258 *********
2026-03-24 03:16:11.326293 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-24 03:16:11.326300 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-24 03:16:11.326306 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-24 03:16:11.326313 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-24 03:16:11.326319 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-24 03:16:11.326325 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-24 03:16:11.326332 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:16:11.326338 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-24 03:16:11.326360 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-24 03:16:11.326366 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-24 03:16:11.326373 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-24 03:16:11.326379 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-24 03:16:11.326385 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-24 03:16:11.326391 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:16:11.326398 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-24 03:16:11.326406 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-24 03:16:11.326416 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-24 03:16:11.326427 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-24 03:16:11.326435 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-24 03:16:11.326444 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-24 03:16:11.326454 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:16:11.326460 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-24 03:16:11.326536 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-24 03:16:11.326544 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-24 03:16:11.326551 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-24 03:16:11.326557 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-24 03:16:11.326563 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-24 03:16:11.326569 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:11.326575 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-24 03:16:11.326581 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-24 03:16:11.326593 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-24 03:16:11.326599 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-24 03:16:11.326605 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-24 03:16:11.326612 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-24 03:16:11.326618 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:11.326624 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-24 03:16:11.326630 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-24 03:16:11.326636 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-24 03:16:11.326642 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-24 03:16:11.326648 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-24 03:16:11.326654 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-24 03:16:11.326660 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:11.326666 | orchestrator |
2026-03-24 03:16:11.326673 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-24 03:16:11.326679 | orchestrator |
2026-03-24 03:16:11.326685 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-24 03:16:11.326697 | orchestrator | Tuesday 24 March 2026 03:16:10 +0000 (0:00:01.266) 0:08:23.524 *********
2026-03-24 03:16:11.326703 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-24 03:16:11.326710 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-24 03:16:11.326716 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:11.326722 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-24 03:16:11.326728 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-24 03:16:11.326734 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:11.326740 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-24 03:16:11.326746 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-24 03:16:11.326752 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:11.326759 | orchestrator |
2026-03-24 03:16:11.326772 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-24 03:16:12.893642 | orchestrator |
2026-03-24 03:16:12.893740 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-24 03:16:12.893752 | orchestrator | Tuesday 24 March 2026 03:16:11 +0000 (0:00:00.514) 0:08:24.038 *********
2026-03-24 03:16:12.893761 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:12.893769 | orchestrator |
2026-03-24 03:16:12.893776 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-24 03:16:12.893783 | orchestrator |
2026-03-24 03:16:12.893790 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-24 03:16:12.893796 | orchestrator | Tuesday 24 March 2026 03:16:12 +0000 (0:00:00.801) 0:08:24.840 *********
2026-03-24 03:16:12.893803 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:12.893810 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:12.893817 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:12.893824 | orchestrator |
2026-03-24 03:16:12.893830 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 03:16:12.893838 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-24 03:16:12.893847 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-24 03:16:12.893854 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-24 03:16:12.893861 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-24 03:16:12.893868 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-24 03:16:12.893875 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-24 03:16:12.893881 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-24 03:16:12.893888 | orchestrator |
2026-03-24 03:16:12.893895 | orchestrator |
2026-03-24 03:16:12.893902 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 03:16:12.893908 | orchestrator | Tuesday 24 March 2026 03:16:12 +0000 (0:00:00.436) 0:08:25.276 *********
2026-03-24 03:16:12.893915 | orchestrator | ===============================================================================
2026-03-24 03:16:12.893922 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.08s
2026-03-24 03:16:12.893928 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.91s
2026-03-24 03:16:12.893935 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.40s
2026-03-24 03:16:12.893986 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.54s
2026-03-24 03:16:12.893994 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.62s
2026-03-24 03:16:12.894001 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.89s
2026-03-24 03:16:12.894007 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.22s
2026-03-24 03:16:12.894065 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.13s
2026-03-24 03:16:12.894074 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.05s
2026-03-24 03:16:12.894081 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.89s
2026-03-24 03:16:12.894088 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.84s
2026-03-24 03:16:12.894094 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.27s
2026-03-24 03:16:12.894101 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.04s
2026-03-24 03:16:12.894108 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.72s
2026-03-24 03:16:12.894114 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.77s
2026-03-24 03:16:12.894121 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.11s
2026-03-24 03:16:12.894128 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.92s
2026-03-24 03:16:12.894134 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.85s
2026-03-24 03:16:12.894141 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.78s
2026-03-24 03:16:12.894148 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 6.59s
2026-03-24 03:16:15.089510 | orchestrator | 2026-03-24 03:16:15 | INFO  | Task ca2b12ac-64ee-4fed-a098-e6a03308300f (horizon) was prepared for execution.
2026-03-24 03:16:15.089583 | orchestrator | 2026-03-24 03:16:15 | INFO  | It takes a moment until task ca2b12ac-64ee-4fed-a098-e6a03308300f (horizon) has been started and output is visible here.
2026-03-24 03:16:20.977707 | orchestrator |
2026-03-24 03:16:20.977808 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 03:16:20.977849 | orchestrator |
2026-03-24 03:16:20.977872 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 03:16:20.977887 | orchestrator | Tuesday 24 March 2026 03:16:18 +0000 (0:00:00.188) 0:00:00.188 *********
2026-03-24 03:16:20.977901 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:20.977916 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:20.977930 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:20.977943 | orchestrator |
2026-03-24 03:16:20.977958 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 03:16:20.977972 | orchestrator | Tuesday 24 March 2026 03:16:18 +0000 (0:00:00.229) 0:00:00.417 *********
2026-03-24 03:16:20.977987 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-24 03:16:20.978002 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-24 03:16:20.978078 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-24 03:16:20.978097 | orchestrator |
2026-03-24 03:16:20.978113 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-24 03:16:20.978127 | orchestrator |
2026-03-24 03:16:20.978139 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-24 03:16:20.978153 | orchestrator | Tuesday 24 March 2026 03:16:19 +0000 (0:00:00.328) 0:00:00.746 *********
2026-03-24 03:16:20.978169 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 03:16:20.978185 | orchestrator |
2026-03-24 03:16:20.978199 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-24 03:16:20.978214 | orchestrator | Tuesday 24 March 2026 03:16:19 +0000 (0:00:00.494) 0:00:01.241 *********
2026-03-24 03:16:20.978288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-24 03:16:20.978360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-24 03:16:20.978406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-24 03:16:20.978425 | orchestrator |
2026-03-24 03:16:20.978441 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-03-24 03:16:20.978457 | orchestrator | Tuesday 24 March 2026 03:16:20 +0000 (0:00:01.087) 0:00:02.329 *********
2026-03-24 03:16:20.978473 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:20.978489 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:20.978504 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:20.978519 | orchestrator |
2026-03-24 03:16:20.978535 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-24 03:16:20.978551 | orchestrator | Tuesday 24 March 2026 03:16:20 +0000 (0:00:00.294) 0:00:02.624 *********
2026-03-24 03:16:20.978575 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-24 03:16:25.865789 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-24 03:16:25.865865 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-03-24 03:16:25.865871 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-03-24 03:16:25.865875 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-03-24 03:16:25.865880 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-03-24 03:16:25.865884 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-03-24 03:16:25.865888 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-03-24 03:16:25.865908 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-24 03:16:25.865911 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-24 03:16:25.865915 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-03-24 03:16:25.865919 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-03-24 03:16:25.865923 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-03-24 03:16:25.865927 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-03-24 03:16:25.865930 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-03-24 03:16:25.865934 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-03-24 03:16:25.865938 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-03-24 03:16:25.865942 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-03-24 03:16:25.865945 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-03-24 03:16:25.865949 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-03-24 03:16:25.865953 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-03-24 03:16:25.865957 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-03-24 03:16:25.865960 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-03-24 03:16:25.865964 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-03-24 03:16:25.865969 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-03-24 03:16:25.865974 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-03-24 03:16:25.865978 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-03-24 03:16:25.865982 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-03-24 03:16:25.865997 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-03-24 03:16:25.866001 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-03-24 03:16:25.866005 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-03-24 03:16:25.866008 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-03-24 03:16:25.866041 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-03-24 03:16:25.866047 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-03-24 03:16:25.866051 | orchestrator |
2026-03-24 03:16:25.866056 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:25.866060 | orchestrator | Tuesday 24 March 2026 03:16:21 +0000 (0:00:00.581) 0:00:03.205 *********
2026-03-24 03:16:25.866064 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:25.866073 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:25.866076 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:25.866080 | orchestrator |
2026-03-24 03:16:25.866084 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:25.866088 | orchestrator | Tuesday 24 March 2026 03:16:21 +0000 (0:00:00.272) 0:00:03.478 *********
2026-03-24 03:16:25.866092 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866096 | orchestrator |
2026-03-24 03:16:25.866114 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:25.866119 | orchestrator | Tuesday 24 March 2026 03:16:21 +0000 (0:00:00.202) 0:00:03.681 *********
2026-03-24 03:16:25.866122 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866126 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:25.866130 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:25.866134 | orchestrator |
2026-03-24 03:16:25.866137 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:25.866141 | orchestrator | Tuesday 24 March 2026 03:16:22 +0000 (0:00:00.249) 0:00:03.930 *********
2026-03-24 03:16:25.866145 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:25.866149 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:25.866152 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:25.866156 | orchestrator |
2026-03-24 03:16:25.866160 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:25.866164 | orchestrator | Tuesday 24 March 2026 03:16:22 +0000 (0:00:00.274) 0:00:04.204 *********
2026-03-24 03:16:25.866168 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866171 | orchestrator |
2026-03-24 03:16:25.866175 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:25.866179 | orchestrator | Tuesday 24 March 2026 03:16:22 +0000 (0:00:00.120) 0:00:04.325 *********
2026-03-24 03:16:25.866183 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866187 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:25.866191 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:25.866194 | orchestrator |
2026-03-24 03:16:25.866198 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:25.866202 | orchestrator | Tuesday 24 March 2026 03:16:22 +0000 (0:00:00.246) 0:00:04.572 *********
2026-03-24 03:16:25.866206 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:25.866210 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:25.866213 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:25.866217 | orchestrator |
2026-03-24 03:16:25.866221 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:25.866225 | orchestrator | Tuesday 24 March 2026 03:16:23 +0000 (0:00:00.386) 0:00:04.959 *********
2026-03-24 03:16:25.866229 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866232 | orchestrator |
2026-03-24 03:16:25.866236 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:25.866240 | orchestrator | Tuesday 24 March 2026 03:16:23 +0000 (0:00:00.100) 0:00:05.059 *********
2026-03-24 03:16:25.866244 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866247 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:25.866251 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:25.866255 | orchestrator |
2026-03-24 03:16:25.866259 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:25.866262 | orchestrator | Tuesday 24 March 2026 03:16:23 +0000 (0:00:00.263) 0:00:05.323 *********
2026-03-24 03:16:25.866266 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:25.866270 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:25.866274 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:25.866277 | orchestrator |
2026-03-24 03:16:25.866281 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:25.866285 | orchestrator | Tuesday 24 March 2026 03:16:23 +0000 (0:00:00.269) 0:00:05.592 *********
2026-03-24 03:16:25.866289 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866293 | orchestrator |
2026-03-24 03:16:25.866301 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:25.866304 | orchestrator | Tuesday 24 March 2026 03:16:23 +0000 (0:00:00.118) 0:00:05.710 *********
2026-03-24 03:16:25.866308 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866312 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:25.866316 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:25.866319 | orchestrator |
2026-03-24 03:16:25.866323 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:25.866327 | orchestrator | Tuesday 24 March 2026 03:16:24 +0000 (0:00:00.356) 0:00:06.067 *********
2026-03-24 03:16:25.866331 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:25.866381 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:25.866393 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:25.866400 | orchestrator |
2026-03-24 03:16:25.866406 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:25.866413 | orchestrator | Tuesday 24 March 2026 03:16:24 +0000 (0:00:00.279) 0:00:06.347 *********
2026-03-24 03:16:25.866420 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866426 | orchestrator |
2026-03-24 03:16:25.866430 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:25.866435 | orchestrator | Tuesday 24 March 2026 03:16:24 +0000 (0:00:00.101) 0:00:06.448 *********
2026-03-24 03:16:25.866439 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866443 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:25.866448 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:25.866452 | orchestrator |
2026-03-24 03:16:25.866456 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:25.866461 | orchestrator | Tuesday 24 March 2026 03:16:24 +0000 (0:00:00.252) 0:00:06.700 *********
2026-03-24 03:16:25.866465 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:25.866469 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:25.866474 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:25.866478 | orchestrator |
2026-03-24 03:16:25.866482 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:25.866486 | orchestrator | Tuesday 24 March 2026 03:16:25 +0000 (0:00:00.299) 0:00:07.000 *********
2026-03-24 03:16:25.866491 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866495 | orchestrator |
2026-03-24 03:16:25.866499 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:25.866503 | orchestrator | Tuesday 24 March 2026 03:16:25 +0000 (0:00:00.290) 0:00:07.291 *********
2026-03-24 03:16:25.866508 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:25.866512 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:25.866516 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:25.866521 | orchestrator |
2026-03-24 03:16:25.866525 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:25.866534 | orchestrator | Tuesday 24 March 2026 03:16:25 +0000 (0:00:00.288) 0:00:07.579 *********
2026-03-24 03:16:39.119462 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:39.119579 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:39.119593 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:39.119603 | orchestrator |
2026-03-24 03:16:39.119613 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:39.119624 | orchestrator | Tuesday 24 March 2026 03:16:26 +0000 (0:00:00.295) 0:00:07.875 *********
2026-03-24 03:16:39.119633 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:39.119642 | orchestrator |
2026-03-24 03:16:39.119652 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:39.119661 | orchestrator | Tuesday 24 March 2026 03:16:26 +0000 (0:00:00.144) 0:00:08.020 *********
2026-03-24 03:16:39.119669 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:39.119677 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:39.119686 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:39.119695 | orchestrator |
2026-03-24 03:16:39.119703 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:39.119735 | orchestrator | Tuesday 24 March 2026 03:16:26 +0000 (0:00:00.289) 0:00:08.309 *********
2026-03-24 03:16:39.119744 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:39.119752 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:39.119762 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:39.119770 | orchestrator |
2026-03-24 03:16:39.119778 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:39.119786 | orchestrator | Tuesday 24 March 2026 03:16:27 +0000 (0:00:00.471) 0:00:08.780 *********
2026-03-24 03:16:39.119795 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:39.119803 | orchestrator |
2026-03-24 03:16:39.119811 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:39.119820 | orchestrator | Tuesday 24 March 2026 03:16:27 +0000 (0:00:00.126) 0:00:08.906 *********
2026-03-24 03:16:39.119828 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:39.119836 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:39.119845 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:39.119853 | orchestrator |
2026-03-24 03:16:39.119861 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:39.119870 | orchestrator | Tuesday 24 March 2026 03:16:27 +0000 (0:00:00.283) 0:00:09.190 *********
2026-03-24 03:16:39.119879 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:39.119888 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:39.119898 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:39.119907 | orchestrator |
2026-03-24 03:16:39.119917 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:39.119927 | orchestrator | Tuesday 24 March 2026 03:16:27 +0000 (0:00:00.331) 0:00:09.521 *********
2026-03-24 03:16:39.119937 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:39.119946 | orchestrator |
2026-03-24 03:16:39.119955 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:39.119964 | orchestrator | Tuesday 24 March 2026 03:16:27 +0000 (0:00:00.130) 0:00:09.652 *********
2026-03-24 03:16:39.119973 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:39.119981 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:39.119990 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:39.119998 | orchestrator |
2026-03-24 03:16:39.120007 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-24 03:16:39.120017 | orchestrator | Tuesday 24 March 2026 03:16:28 +0000 (0:00:00.458) 0:00:10.111 *********
2026-03-24 03:16:39.120025 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:16:39.120034 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:16:39.120043 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:16:39.120052 | orchestrator |
2026-03-24 03:16:39.120060 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-24 03:16:39.120070 | orchestrator | Tuesday 24 March 2026 03:16:28 +0000 (0:00:00.316) 0:00:10.427 *********
2026-03-24 03:16:39.120078 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:39.120087 | orchestrator |
2026-03-24 03:16:39.120096 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-24 03:16:39.120105 | orchestrator | Tuesday 24 March 2026 03:16:28 +0000 (0:00:00.137) 0:00:10.565 *********
2026-03-24 03:16:39.120129 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:16:39.120139 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:16:39.120148 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:16:39.120157 | orchestrator |
2026-03-24 03:16:39.120167 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-24 03:16:39.120176 | orchestrator |
Tuesday 24 March 2026 03:16:29 +0000 (0:00:00.289) 0:00:10.854 ********* 2026-03-24 03:16:39.120185 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:16:39.120194 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:16:39.120203 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:16:39.120213 | orchestrator | 2026-03-24 03:16:39.120221 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-24 03:16:39.120237 | orchestrator | Tuesday 24 March 2026 03:16:30 +0000 (0:00:01.754) 0:00:12.609 ********* 2026-03-24 03:16:39.120247 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-24 03:16:39.120257 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-24 03:16:39.120265 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-24 03:16:39.120273 | orchestrator | 2026-03-24 03:16:39.120281 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-24 03:16:39.120289 | orchestrator | Tuesday 24 March 2026 03:16:32 +0000 (0:00:01.828) 0:00:14.437 ********* 2026-03-24 03:16:39.120297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-24 03:16:39.120307 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-24 03:16:39.120315 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-24 03:16:39.120323 | orchestrator | 2026-03-24 03:16:39.120352 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-24 03:16:39.120380 | orchestrator | Tuesday 24 March 2026 03:16:34 +0000 (0:00:01.790) 0:00:16.228 ********* 2026-03-24 03:16:39.120389 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-24 03:16:39.120397 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-24 03:16:39.120406 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-24 03:16:39.120414 | orchestrator | 2026-03-24 03:16:39.120423 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-24 03:16:39.120431 | orchestrator | Tuesday 24 March 2026 03:16:36 +0000 (0:00:01.505) 0:00:17.734 ********* 2026-03-24 03:16:39.120439 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:16:39.120448 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:16:39.120456 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:16:39.120464 | orchestrator | 2026-03-24 03:16:39.120472 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-24 03:16:39.120480 | orchestrator | Tuesday 24 March 2026 03:16:36 +0000 (0:00:00.443) 0:00:18.177 ********* 2026-03-24 03:16:39.120488 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:16:39.120496 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:16:39.120504 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:16:39.120512 | orchestrator | 2026-03-24 03:16:39.120520 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-24 03:16:39.120528 | orchestrator | Tuesday 24 March 2026 03:16:36 +0000 (0:00:00.268) 0:00:18.446 ********* 2026-03-24 03:16:39.120537 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:16:39.120545 | orchestrator | 2026-03-24 03:16:39.120553 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-24 
03:16:39.120561 | orchestrator | Tuesday 24 March 2026 03:16:37 +0000 (0:00:00.565) 0:00:19.012 ********* 2026-03-24 03:16:39.120583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 03:16:39.120611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 03:16:39.719151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 03:16:39.719261 | orchestrator | 2026-03-24 03:16:39.719274 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-24 03:16:39.719282 | orchestrator | Tuesday 24 March 2026 03:16:39 +0000 (0:00:01.812) 0:00:20.824 ********* 2026-03-24 03:16:39.719306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 03:16:39.719320 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:16:39.719381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 03:16:39.719390 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:16:39.719404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 03:16:42.094742 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:16:42.094839 | orchestrator | 2026-03-24 03:16:42.094851 | orchestrator | TASK 
[service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-24 03:16:42.094860 | orchestrator | Tuesday 24 March 2026 03:16:39 +0000 (0:00:00.606) 0:00:21.430 ********* 2026-03-24 03:16:42.094885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 03:16:42.094895 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:16:42.094916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 03:16:42.094943 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:16:42.094980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 03:16:42.094987 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:16:42.094994 | orchestrator | 2026-03-24 03:16:42.095000 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-24 03:16:42.095006 | orchestrator | Tuesday 24 March 2026 03:16:40 +0000 (0:00:00.809) 0:00:22.240 ********* 2026-03-24 03:16:42.095034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 03:17:23.397791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 03:17:23.397942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 03:17:23.397958 | 
orchestrator |
2026-03-24 03:17:23.397967 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-24 03:17:23.397975 | orchestrator | Tuesday 24 March 2026 03:16:42 +0000 (0:00:01.565) 0:00:23.806 *********
2026-03-24 03:17:23.397982 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:17:23.397990 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:17:23.397996 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:17:23.398002 | orchestrator |
2026-03-24 03:17:23.398008 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-24 03:17:23.398065 | orchestrator | Tuesday 24 March 2026 03:16:42 +0000 (0:00:00.293) 0:00:24.100 *********
2026-03-24 03:17:23.398076 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 03:17:23.398083 | orchestrator |
2026-03-24 03:17:23.398090 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-24 03:17:23.398097 | orchestrator | Tuesday 24 March 2026 03:16:42 +0000 (0:00:00.494) 0:00:24.594 *********
2026-03-24 03:17:23.398103 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:17:23.398110 | orchestrator |
2026-03-24 03:17:23.398117 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-24 03:17:23.398123 | orchestrator | Tuesday 24 March 2026 03:16:45 +0000 (0:00:02.209) 0:00:26.803 *********
2026-03-24 03:17:23.398130 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:17:23.398136 | orchestrator |
2026-03-24 03:17:23.398143 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-24 03:17:23.398150 | orchestrator | Tuesday 24 March 2026 03:16:47 +0000 (0:00:02.630) 0:00:29.434 *********
2026-03-24 03:17:23.398156 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:17:23.398163 | orchestrator |
2026-03-24 03:17:23.398177 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-24 03:17:23.398184 | orchestrator | Tuesday 24 March 2026 03:17:04 +0000 (0:00:16.616) 0:00:46.051 *********
2026-03-24 03:17:23.398190 | orchestrator |
2026-03-24 03:17:23.398196 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-24 03:17:23.398204 | orchestrator | Tuesday 24 March 2026 03:17:04 +0000 (0:00:00.083) 0:00:46.135 *********
2026-03-24 03:17:23.398210 | orchestrator |
2026-03-24 03:17:23.398216 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-24 03:17:23.398222 | orchestrator | Tuesday 24 March 2026 03:17:04 +0000 (0:00:00.063) 0:00:46.198 *********
2026-03-24 03:17:23.398229 | orchestrator |
2026-03-24 03:17:23.398235 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-24 03:17:23.398241 | orchestrator | Tuesday 24 March 2026 03:17:04 +0000 (0:00:00.068) 0:00:46.266 *********
2026-03-24 03:17:23.398248 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:17:23.398254 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:17:23.398260 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:17:23.398266 | orchestrator |
2026-03-24 03:17:23.398272 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 03:17:23.398279 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-24 03:17:23.398287 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-24 03:17:23.398293 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-24 03:17:23.398299 | orchestrator |
2026-03-24 03:17:23.398343 | orchestrator |
2026-03-24 03:17:23.398351 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 03:17:23.398358 | orchestrator | Tuesday 24 March 2026 03:17:23 +0000 (0:00:18.819) 0:01:05.086 *********
2026-03-24 03:17:23.398365 | orchestrator | ===============================================================================
2026-03-24 03:17:23.398371 | orchestrator | horizon : Restart horizon container ------------------------------------ 18.82s
2026-03-24 03:17:23.398378 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.62s
2026-03-24 03:17:23.398386 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.63s
2026-03-24 03:17:23.398393 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.21s
2026-03-24 03:17:23.398400 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.83s
2026-03-24 03:17:23.398414 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.81s
2026-03-24 03:17:23.398421 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.79s
2026-03-24 03:17:23.398428 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.75s
2026-03-24 03:17:23.398435 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.57s
2026-03-24 03:17:23.398442 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.51s
2026-03-24 03:17:23.398449 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.09s
2026-03-24 03:17:23.398456 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.81s
2026-03-24 03:17:23.398463 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.61s
2026-03-24 03:17:23.398481 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2026-03-24 03:17:23.702415 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s
2026-03-24 03:17:23.702508 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s
2026-03-24 03:17:23.702518 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s
2026-03-24 03:17:23.702549 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s
2026-03-24 03:17:23.702557 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.46s
2026-03-24 03:17:23.702564 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.44s
2026-03-24 03:17:25.896002 | orchestrator | 2026-03-24 03:17:25 | INFO  | Task 99e2b55c-4bd6-4ff5-82d3-63ba07a088c9 (skyline) was prepared for execution.
2026-03-24 03:17:25.896068 | orchestrator | 2026-03-24 03:17:25 | INFO  | It takes a moment until task 99e2b55c-4bd6-4ff5-82d3-63ba07a088c9 (skyline) has been started and output is visible here.
2026-03-24 03:17:57.111135 | orchestrator |
2026-03-24 03:17:57.111224 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 03:17:57.111231 | orchestrator |
2026-03-24 03:17:57.111236 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 03:17:57.111241 | orchestrator | Tuesday 24 March 2026 03:17:29 +0000 (0:00:00.249) 0:00:00.249 *********
2026-03-24 03:17:57.111245 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:17:57.111250 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:17:57.111255 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:17:57.111258 | orchestrator |
2026-03-24 03:17:57.111265 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 03:17:57.111271 | orchestrator | Tuesday 24 March 2026 03:17:30 +0000 (0:00:00.288) 0:00:00.537 *********
2026-03-24 03:17:57.111276 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-03-24 03:17:57.111315 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-03-24 03:17:57.111320 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-03-24 03:17:57.111323 | orchestrator |
2026-03-24 03:17:57.111328 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-03-24 03:17:57.111331 | orchestrator |
2026-03-24 03:17:57.111335 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-03-24 03:17:57.111340 | orchestrator | Tuesday 24 March 2026 03:17:30 +0000 (0:00:00.419) 0:00:00.957 *********
2026-03-24 03:17:57.111345 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 03:17:57.111349 | orchestrator |
2026-03-24 03:17:57.111353 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-03-24 03:17:57.111357 | orchestrator | Tuesday 24 March 2026 03:17:31 +0000 (0:00:00.509) 0:00:01.466 *********
2026-03-24 03:17:57.111361 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-03-24 03:17:57.111364 | orchestrator |
2026-03-24 03:17:57.111368 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-03-24 03:17:57.111372 | orchestrator | Tuesday 24 March 2026 03:17:34 +0000 (0:00:03.407) 0:00:04.874 *********
2026-03-24 03:17:57.111376 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-03-24 03:17:57.111380 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-03-24 03:17:57.111384 | orchestrator |
2026-03-24 03:17:57.111387 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-03-24 03:17:57.111391 | orchestrator | Tuesday 24 March 2026 03:17:41 +0000 (0:00:06.736) 0:00:11.610 *********
2026-03-24 03:17:57.111395 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-24 03:17:57.111399 | orchestrator |
2026-03-24 03:17:57.111403 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-03-24 03:17:57.111407 | orchestrator | Tuesday 24 March 2026 03:17:44 +0000 (0:00:04.015) 0:00:15.075 *********
2026-03-24 03:17:57.111411 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-24 03:17:57.111415 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-03-24 03:17:57.111419 | orchestrator |
2026-03-24 03:17:57.111423 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-03-24 03:17:57.111446 | orchestrator | Tuesday 24 March 2026 03:17:48 +0000 (0:00:03.231) 0:00:19.090 *********
2026-03-24 03:17:57.111450 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-24 03:17:57.111454 | orchestrator | 2026-03-24 03:17:57.111458 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-03-24 03:17:57.111462 | orchestrator | Tuesday 24 March 2026 03:17:51 +0000 (0:00:03.231) 0:00:22.322 ********* 2026-03-24 03:17:57.111466 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-03-24 03:17:57.111470 | orchestrator | 2026-03-24 03:17:57.111483 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-03-24 03:17:57.111487 | orchestrator | Tuesday 24 March 2026 03:17:55 +0000 (0:00:03.837) 0:00:26.160 ********* 2026-03-24 03:17:57.111495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:17:57.111512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:17:57.111517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:17:57.111522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:17:57.111534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:17:57.111542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:00.716959 | orchestrator | 2026-03-24 03:18:00.717070 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-24 03:18:00.717096 | orchestrator | Tuesday 24 March 2026 03:17:57 +0000 (0:00:01.266) 0:00:27.426 ********* 2026-03-24 03:18:00.717115 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:18:00.717135 | orchestrator | 2026-03-24 03:18:00.717154 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-03-24 03:18:00.717172 | orchestrator | Tuesday 24 March 2026 03:17:57 +0000 (0:00:00.649) 0:00:28.076 ********* 2026-03-24 03:18:00.717194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:00.717255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:00.717328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:00.717365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:00.717379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:00.717391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:00.717412 | orchestrator | 2026-03-24 03:18:00.717424 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-03-24 03:18:00.717435 | orchestrator | Tuesday 24 March 2026 03:18:00 +0000 (0:00:02.371) 0:00:30.448 ********* 2026-03-24 03:18:00.717462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-24 03:18:00.717484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-24 03:18:00.717546 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:18:00.717583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-24 03:18:01.880919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-24 03:18:01.881049 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:18:01.881082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-24 03:18:01.881097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-24 03:18:01.881109 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:18:01.881120 | orchestrator | 2026-03-24 03:18:01.881133 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-03-24 03:18:01.881146 | orchestrator | Tuesday 24 March 2026 03:18:00 +0000 (0:00:00.592) 0:00:31.040 ********* 2026-03-24 03:18:01.881157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-24 03:18:01.881195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-24 03:18:01.881207 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:18:01.881225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-24 03:18:01.881237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-24 03:18:01.881248 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:18:01.881260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-24 03:18:01.881405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-24 03:18:10.275494 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:18:10.275577 | orchestrator | 2026-03-24 03:18:10.275584 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-03-24 03:18:10.275590 | orchestrator | Tuesday 24 March 2026 03:18:01 +0000 (0:00:01.156) 0:00:32.197 ********* 2026-03-24 03:18:10.275608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:10.275615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:10.275620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:10.275639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:10.275654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:10.275662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:10.275666 | orchestrator | 2026-03-24 03:18:10.275670 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-03-24 03:18:10.275674 | orchestrator | Tuesday 24 March 2026 03:18:04 +0000 (0:00:02.413) 0:00:34.611 ********* 2026-03-24 03:18:10.275679 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-24 03:18:10.275683 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-24 03:18:10.275686 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-24 03:18:10.275690 | orchestrator | 2026-03-24 03:18:10.275694 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-03-24 03:18:10.275698 | orchestrator | Tuesday 24 March 2026 03:18:05 +0000 (0:00:01.528) 0:00:36.139 ********* 2026-03-24 03:18:10.275701 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-24 03:18:10.275705 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-24 03:18:10.275714 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-24 03:18:10.275718 | orchestrator | 2026-03-24 03:18:10.275722 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-03-24 03:18:10.275726 | orchestrator | Tuesday 24 March 2026 03:18:07 +0000 (0:00:02.169) 0:00:38.309 ********* 2026-03-24 03:18:10.275730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:10.275740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:12.312610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:12.312717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:12.312752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:12.312762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:12.312773 | orchestrator | 2026-03-24 03:18:12.312784 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-03-24 03:18:12.312796 | orchestrator | Tuesday 24 March 2026 03:18:10 +0000 (0:00:02.289) 0:00:40.598 ********* 2026-03-24 03:18:12.312805 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:18:12.312816 | orchestrator | skipping: 
[testbed-node-1] 2026-03-24 03:18:12.312825 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:18:12.312834 | orchestrator | 2026-03-24 03:18:12.312860 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-03-24 03:18:12.312869 | orchestrator | Tuesday 24 March 2026 03:18:10 +0000 (0:00:00.294) 0:00:40.892 ********* 2026-03-24 03:18:12.312906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:12.312928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:12.312946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:12.312956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:12.312980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:41.550314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-24 03:18:41.550499 | orchestrator | 2026-03-24 03:18:41.550534 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-03-24 03:18:41.550557 | orchestrator | Tuesday 24 March 2026 03:18:12 +0000 (0:00:01.741) 0:00:42.634 ********* 2026-03-24 03:18:41.550576 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:18:41.550596 | orchestrator | 2026-03-24 03:18:41.550614 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-03-24 03:18:41.550633 | orchestrator | Tuesday 24 March 2026 03:18:14 +0000 (0:00:02.209) 0:00:44.843 ********* 2026-03-24 03:18:41.550651 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:18:41.550668 | orchestrator | 2026-03-24 03:18:41.550688 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-03-24 03:18:41.550706 | orchestrator | Tuesday 24 March 2026 03:18:16 +0000 (0:00:02.237) 0:00:47.080 ********* 2026-03-24 03:18:41.550724 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:18:41.550742 | orchestrator | 2026-03-24 03:18:41.550761 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-24 03:18:41.550781 | orchestrator | Tuesday 24 March 2026 03:18:24 +0000 (0:00:07.550) 0:00:54.631 ********* 2026-03-24 03:18:41.550802 | orchestrator | 2026-03-24 03:18:41.550822 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-24 03:18:41.550841 | orchestrator | Tuesday 24 March 2026 03:18:24 +0000 (0:00:00.068) 0:00:54.699 ********* 2026-03-24 03:18:41.550861 | orchestrator | 2026-03-24 03:18:41.550880 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-03-24 03:18:41.550900 | orchestrator | Tuesday 24 March 2026 03:18:24 +0000 (0:00:00.066) 0:00:54.765 ********* 2026-03-24 03:18:41.550919 | orchestrator | 2026-03-24 03:18:41.550938 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-03-24 03:18:41.550957 | orchestrator | Tuesday 24 March 2026 03:18:24 +0000 (0:00:00.068) 0:00:54.834 ********* 2026-03-24 03:18:41.550976 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:18:41.550996 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:18:41.551015 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:18:41.551033 | orchestrator | 2026-03-24 03:18:41.551052 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-03-24 03:18:41.551071 | orchestrator | Tuesday 24 March 2026 03:18:32 +0000 (0:00:07.975) 0:01:02.809 ********* 2026-03-24 03:18:41.551089 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:18:41.551108 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:18:41.551128 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:18:41.551147 | orchestrator | 2026-03-24 03:18:41.551166 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:18:41.551186 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-24 03:18:41.551207 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-24 03:18:41.551226 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-24 03:18:41.551245 | orchestrator | 2026-03-24 03:18:41.551301 | orchestrator | 2026-03-24 03:18:41.551335 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:18:41.551355 | orchestrator | Tuesday 24 
March 2026 03:18:41 +0000 (0:00:08.778) 0:01:11.588 ********* 2026-03-24 03:18:41.551373 | orchestrator | =============================================================================== 2026-03-24 03:18:41.551408 | orchestrator | skyline : Restart skyline-console container ----------------------------- 8.78s 2026-03-24 03:18:41.551428 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 7.98s 2026-03-24 03:18:41.551447 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.55s 2026-03-24 03:18:41.551464 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.74s 2026-03-24 03:18:41.551503 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.02s 2026-03-24 03:18:41.551522 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.84s 2026-03-24 03:18:41.551542 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.46s 2026-03-24 03:18:41.551561 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.41s 2026-03-24 03:18:41.551608 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.23s 2026-03-24 03:18:41.551628 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.41s 2026-03-24 03:18:41.551647 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.37s 2026-03-24 03:18:41.551665 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.29s 2026-03-24 03:18:41.551684 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.24s 2026-03-24 03:18:41.551702 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.21s 2026-03-24 03:18:41.551720 | orchestrator | skyline : Copying over nginx.conf 
files for services -------------------- 2.17s 2026-03-24 03:18:41.551738 | orchestrator | skyline : Check skyline container --------------------------------------- 1.74s 2026-03-24 03:18:41.551756 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.53s 2026-03-24 03:18:41.551774 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.27s 2026-03-24 03:18:41.551791 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.16s 2026-03-24 03:18:41.551809 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.65s 2026-03-24 03:18:43.762718 | orchestrator | 2026-03-24 03:18:43 | INFO  | Task e371c2f3-393f-43f5-b193-6925716880db (glance) was prepared for execution. 2026-03-24 03:18:43.762819 | orchestrator | 2026-03-24 03:18:43 | INFO  | It takes a moment until task e371c2f3-393f-43f5-b193-6925716880db (glance) has been started and output is visible here. 
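The service definitions echoed throughout the skyline tasks above each carry a `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As a rough illustration of what those fields mean, the sketch below maps one such dict onto the equivalent `docker run` health-check flags. This is an assumption for illustration only, not kolla-ansible's actual implementation (kolla passes these values through its own container module); the dict values are copied from the log output, and `healthcheck_flags` is a hypothetical helper.

```python
# Sketch only: translate a kolla-style healthcheck dict (as echoed in the
# task output above) into the equivalent `docker run` health-check flags.
# NOT kolla-ansible's real code path; for illustration of the fields.

def healthcheck_flags(hc: dict) -> list[str]:
    """Map a kolla-style healthcheck dict to docker CLI options."""
    return [
        # 'test' is ['CMD-SHELL', <command>]; drop the CMD-SHELL marker
        f"--health-cmd={' '.join(hc['test'][1:])}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Values taken verbatim from the skyline_apiserver definition in the log
hc = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9998/docs"],
    "timeout": "30",
}

for flag in healthcheck_flags(hc):
    print(flag)
```

Note that the values arrive as strings in the log; a real consumer would likely coerce them to integers before attaching units.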
2026-03-24 03:19:17.093437 | orchestrator | 2026-03-24 03:19:17.093555 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:19:17.093572 | orchestrator | 2026-03-24 03:19:17.093583 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:19:17.093594 | orchestrator | Tuesday 24 March 2026 03:18:47 +0000 (0:00:00.186) 0:00:00.186 ********* 2026-03-24 03:19:17.093604 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:19:17.093615 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:19:17.093623 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:19:17.093632 | orchestrator | 2026-03-24 03:19:17.093642 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:19:17.093651 | orchestrator | Tuesday 24 March 2026 03:18:47 +0000 (0:00:00.214) 0:00:00.400 ********* 2026-03-24 03:19:17.093660 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-24 03:19:17.093671 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-24 03:19:17.093680 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-24 03:19:17.093689 | orchestrator | 2026-03-24 03:19:17.093697 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-24 03:19:17.093705 | orchestrator | 2026-03-24 03:19:17.093714 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-24 03:19:17.093722 | orchestrator | Tuesday 24 March 2026 03:18:48 +0000 (0:00:00.305) 0:00:00.705 ********* 2026-03-24 03:19:17.093753 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:19:17.093763 | orchestrator | 2026-03-24 03:19:17.093773 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-24 
03:19:17.093782 | orchestrator | Tuesday 24 March 2026 03:18:48 +0000 (0:00:00.464) 0:00:01.170 ********* 2026-03-24 03:19:17.093791 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-24 03:19:17.093800 | orchestrator | 2026-03-24 03:19:17.093810 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-24 03:19:17.093820 | orchestrator | Tuesday 24 March 2026 03:18:51 +0000 (0:00:03.449) 0:00:04.620 ********* 2026-03-24 03:19:17.093828 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-24 03:19:17.093838 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-24 03:19:17.093847 | orchestrator | 2026-03-24 03:19:17.093857 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-24 03:19:17.093866 | orchestrator | Tuesday 24 March 2026 03:18:58 +0000 (0:00:06.637) 0:00:11.257 ********* 2026-03-24 03:19:17.093876 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-24 03:19:17.093887 | orchestrator | 2026-03-24 03:19:17.093897 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-24 03:19:17.093907 | orchestrator | Tuesday 24 March 2026 03:19:01 +0000 (0:00:03.332) 0:00:14.590 ********* 2026-03-24 03:19:17.093917 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:19:17.093927 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-24 03:19:17.093936 | orchestrator | 2026-03-24 03:19:17.093944 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-24 03:19:17.093953 | orchestrator | Tuesday 24 March 2026 03:19:06 +0000 (0:00:04.147) 0:00:18.737 ********* 2026-03-24 03:19:17.093961 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-24 
03:19:17.093970 | orchestrator | 2026-03-24 03:19:17.093979 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-24 03:19:17.093988 | orchestrator | Tuesday 24 March 2026 03:19:09 +0000 (0:00:03.378) 0:00:22.116 ********* 2026-03-24 03:19:17.094070 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-24 03:19:17.094086 | orchestrator | 2026-03-24 03:19:17.094105 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-24 03:19:17.094116 | orchestrator | Tuesday 24 March 2026 03:19:13 +0000 (0:00:03.828) 0:00:25.945 ********* 2026-03-24 03:19:17.094156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:19:17.094196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:19:17.094215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:19:17.094227 | orchestrator | 2026-03-24 03:19:17.094238 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-03-24 03:19:17.094339 | orchestrator | Tuesday 24 March 2026 03:19:16 +0000 (0:00:03.189) 0:00:29.135 ********* 2026-03-24 03:19:17.094352 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:19:17.094370 | orchestrator | 2026-03-24 03:19:17.094390 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-24 03:19:31.465700 | orchestrator | Tuesday 24 March 2026 03:19:17 +0000 (0:00:00.627) 0:00:29.762 ********* 2026-03-24 03:19:31.465817 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:19:31.465835 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:19:31.465848 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:19:31.465859 | orchestrator | 2026-03-24 03:19:31.465872 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-24 03:19:31.465883 | orchestrator | Tuesday 24 March 2026 03:19:20 +0000 (0:00:03.321) 0:00:33.083 ********* 2026-03-24 03:19:31.465895 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-24 03:19:31.465907 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-24 03:19:31.465919 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-24 03:19:31.465930 | orchestrator | 2026-03-24 03:19:31.465941 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-24 03:19:31.465952 | orchestrator | Tuesday 24 March 2026 03:19:21 +0000 (0:00:01.511) 0:00:34.595 ********* 2026-03-24 03:19:31.465963 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-24 
03:19:31.465974 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-24 03:19:31.465986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-24 03:19:31.465996 | orchestrator | 2026-03-24 03:19:31.466007 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-24 03:19:31.466079 | orchestrator | Tuesday 24 March 2026 03:19:23 +0000 (0:00:01.272) 0:00:35.868 ********* 2026-03-24 03:19:31.466091 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:19:31.466103 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:19:31.466114 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:19:31.466129 | orchestrator | 2026-03-24 03:19:31.466149 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-24 03:19:31.466169 | orchestrator | Tuesday 24 March 2026 03:19:23 +0000 (0:00:00.694) 0:00:36.562 ********* 2026-03-24 03:19:31.466188 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:19:31.466207 | orchestrator | 2026-03-24 03:19:31.466227 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-24 03:19:31.466274 | orchestrator | Tuesday 24 March 2026 03:19:24 +0000 (0:00:00.130) 0:00:36.693 ********* 2026-03-24 03:19:31.466293 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:19:31.466311 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:19:31.466331 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:19:31.466351 | orchestrator | 2026-03-24 03:19:31.466371 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-24 03:19:31.466389 | orchestrator | Tuesday 24 March 2026 03:19:24 +0000 (0:00:00.277) 0:00:36.971 ********* 2026-03-24 03:19:31.466410 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:19:31.466430 | orchestrator | 2026-03-24 03:19:31.466451 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-24 03:19:31.466471 | orchestrator | Tuesday 24 March 2026 03:19:24 +0000 (0:00:00.685) 0:00:37.657 ********* 2026-03-24 03:19:31.466510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:19:31.466575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:19:31.466597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:19:31.466619 | orchestrator | 2026-03-24 03:19:31.466631 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-24 03:19:31.466642 | orchestrator | Tuesday 24 March 2026 03:19:28 +0000 (0:00:03.580) 0:00:41.237 ********* 2026-03-24 03:19:31.466663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 03:19:34.750320 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:19:34.750439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 03:19:34.750472 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:19:34.750478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 03:19:34.750483 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:19:34.750487 | orchestrator | 2026-03-24 03:19:34.750493 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-24 03:19:34.750498 | orchestrator | Tuesday 24 March 2026 03:19:31 +0000 (0:00:02.900) 0:00:44.137 ********* 2026-03-24 03:19:34.750517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 03:19:34.750527 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:19:34.750536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 03:19:34.750541 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:19:34.750550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 03:20:03.033924 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:20:03.034095 | orchestrator | 2026-03-24 03:20:03.034116 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-24 03:20:03.034128 | orchestrator | Tuesday 24 March 2026 03:19:34 +0000 (0:00:03.282) 0:00:47.420 ********* 2026-03-24 03:20:03.034134 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:20:03.034166 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:20:03.034173 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:20:03.034178 | orchestrator | 2026-03-24 03:20:03.034185 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-24 03:20:03.034191 | orchestrator | Tuesday 24 March 2026 03:19:37 +0000 (0:00:02.829) 0:00:50.249 ********* 2026-03-24 03:20:03.034211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:20:03.034266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:20:03.034295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:20:03.034308 | orchestrator | 2026-03-24 03:20:03.034314 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-24 03:20:03.034320 | orchestrator | Tuesday 24 March 2026 03:19:40 +0000 (0:00:03.246) 0:00:53.496 ********* 2026-03-24 03:20:03.034325 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:20:03.034331 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:20:03.034336 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:20:03.034342 | orchestrator | 2026-03-24 03:20:03.034347 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-24 03:20:03.034353 | orchestrator | Tuesday 24 March 2026 03:19:45 +0000 (0:00:04.685) 0:00:58.182 ********* 2026-03-24 03:20:03.034358 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:20:03.034363 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:20:03.034369 | 
orchestrator | skipping: [testbed-node-2] 2026-03-24 03:20:03.034374 | orchestrator | 2026-03-24 03:20:03.034379 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-24 03:20:03.034385 | orchestrator | Tuesday 24 March 2026 03:19:48 +0000 (0:00:02.813) 0:01:00.996 ********* 2026-03-24 03:20:03.034390 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:20:03.034396 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:20:03.034401 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:20:03.034406 | orchestrator | 2026-03-24 03:20:03.034412 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-24 03:20:03.034417 | orchestrator | Tuesday 24 March 2026 03:19:51 +0000 (0:00:02.812) 0:01:03.809 ********* 2026-03-24 03:20:03.034422 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:20:03.034428 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:20:03.034433 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:20:03.034439 | orchestrator | 2026-03-24 03:20:03.034444 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-24 03:20:03.034450 | orchestrator | Tuesday 24 March 2026 03:19:53 +0000 (0:00:02.688) 0:01:06.497 ********* 2026-03-24 03:20:03.034455 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:20:03.034461 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:20:03.034467 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:20:03.034474 | orchestrator | 2026-03-24 03:20:03.034480 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-24 03:20:03.034486 | orchestrator | Tuesday 24 March 2026 03:19:56 +0000 (0:00:02.630) 0:01:09.128 ********* 2026-03-24 03:20:03.034493 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:20:03.034499 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:20:03.034510 | 
orchestrator | skipping: [testbed-node-2] 2026-03-24 03:20:03.034517 | orchestrator | 2026-03-24 03:20:03.034523 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-24 03:20:03.034530 | orchestrator | Tuesday 24 March 2026 03:19:56 +0000 (0:00:00.333) 0:01:09.461 ********* 2026-03-24 03:20:03.034538 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-24 03:20:03.034545 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:20:03.034551 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-24 03:20:03.034556 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:20:03.034562 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-24 03:20:03.034567 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:20:03.034572 | orchestrator | 2026-03-24 03:20:03.034578 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-24 03:20:03.034583 | orchestrator | Tuesday 24 March 2026 03:19:59 +0000 (0:00:02.592) 0:01:12.054 ********* 2026-03-24 03:20:03.034589 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:20:03.034594 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:20:03.034600 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:20:03.034605 | orchestrator | 2026-03-24 03:20:03.034610 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-24 03:20:03.034620 | orchestrator | Tuesday 24 March 2026 03:20:03 +0000 (0:00:03.647) 0:01:15.701 ********* 2026-03-24 03:21:08.341559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:21:08.341657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:21:08.341703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 03:21:08.341711 | orchestrator | 2026-03-24 03:21:08.341718 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-24 03:21:08.341724 | orchestrator | Tuesday 24 March 2026 03:20:06 +0000 (0:00:03.114) 0:01:18.815 ********* 2026-03-24 03:21:08.341729 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:21:08.341735 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:21:08.341739 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:21:08.341744 | orchestrator | 2026-03-24 03:21:08.341749 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-24 03:21:08.341754 | orchestrator | Tuesday 24 March 2026 03:20:06 +0000 (0:00:00.352) 0:01:19.168 ********* 2026-03-24 03:21:08.341759 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:21:08.341764 | orchestrator | 2026-03-24 03:21:08.341769 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-03-24 03:21:08.341774 | orchestrator | Tuesday 24 March 2026 03:20:08 +0000 (0:00:02.169) 0:01:21.337 ********* 2026-03-24 03:21:08.341778 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:21:08.341784 | orchestrator | 2026-03-24 03:21:08.341788 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-24 03:21:08.341793 | orchestrator | Tuesday 24 March 2026 03:20:10 +0000 (0:00:02.297) 0:01:23.635 ********* 2026-03-24 03:21:08.341798 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:21:08.341808 | orchestrator | 2026-03-24 03:21:08.341813 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-24 03:21:08.341818 | orchestrator | Tuesday 24 March 2026 03:20:13 +0000 (0:00:02.193) 0:01:25.828 ********* 2026-03-24 03:21:08.341823 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:21:08.341827 | orchestrator | 2026-03-24 03:21:08.341832 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-24 03:21:08.341837 | orchestrator | Tuesday 24 March 2026 03:20:39 +0000 (0:00:26.834) 0:01:52.662 ********* 2026-03-24 03:21:08.341842 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:21:08.341847 | orchestrator | 2026-03-24 03:21:08.341851 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-24 03:21:08.341856 | orchestrator | Tuesday 24 March 2026 03:20:42 +0000 (0:00:02.141) 0:01:54.803 ********* 2026-03-24 03:21:08.341861 | orchestrator | 2026-03-24 03:21:08.341866 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-24 03:21:08.341871 | orchestrator | Tuesday 24 March 2026 03:20:42 +0000 (0:00:00.064) 0:01:54.868 ********* 2026-03-24 03:21:08.341875 | orchestrator | 2026-03-24 03:21:08.341880 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-03-24 03:21:08.341885 | orchestrator | Tuesday 24 March 2026 03:20:42 +0000 (0:00:00.064) 0:01:54.933 ********* 2026-03-24 03:21:08.341890 | orchestrator | 2026-03-24 03:21:08.341894 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-24 03:21:08.341899 | orchestrator | Tuesday 24 March 2026 03:20:42 +0000 (0:00:00.066) 0:01:54.999 ********* 2026-03-24 03:21:08.341904 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:21:08.341909 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:21:08.341914 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:21:08.341918 | orchestrator | 2026-03-24 03:21:08.341923 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:21:08.341929 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-24 03:21:08.341935 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-24 03:21:08.341940 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-24 03:21:08.341945 | orchestrator | 2026-03-24 03:21:08.341950 | orchestrator | 2026-03-24 03:21:08.341955 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:21:08.341960 | orchestrator | Tuesday 24 March 2026 03:21:08 +0000 (0:00:26.001) 0:02:21.000 ********* 2026-03-24 03:21:08.341964 | orchestrator | =============================================================================== 2026-03-24 03:21:08.341969 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.83s 2026-03-24 03:21:08.341974 | orchestrator | glance : Restart glance-api container ---------------------------------- 26.00s 2026-03-24 03:21:08.341979 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.64s 2026-03-24 03:21:08.341987 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 4.69s 2026-03-24 03:21:08.528997 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.15s 2026-03-24 03:21:08.529124 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.83s 2026-03-24 03:21:08.529148 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.65s 2026-03-24 03:21:08.529167 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.58s 2026-03-24 03:21:08.529183 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.45s 2026-03-24 03:21:08.529297 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.38s 2026-03-24 03:21:08.529342 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.33s 2026-03-24 03:21:08.529390 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.32s 2026-03-24 03:21:08.529408 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.28s 2026-03-24 03:21:08.529421 | orchestrator | glance : Copying over config.json files for services -------------------- 3.25s 2026-03-24 03:21:08.529436 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.19s 2026-03-24 03:21:08.529452 | orchestrator | glance : Check glance containers ---------------------------------------- 3.11s 2026-03-24 03:21:08.529469 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 2.90s 2026-03-24 03:21:08.529487 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 2.83s 2026-03-24 03:21:08.529544 | 
orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 2.81s 2026-03-24 03:21:08.529562 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 2.81s 2026-03-24 03:21:10.407498 | orchestrator | 2026-03-24 03:21:10 | INFO  | Task bb89f439-c376-4a6b-a473-888c5004b2c2 (cinder) was prepared for execution. 2026-03-24 03:21:10.407593 | orchestrator | 2026-03-24 03:21:10 | INFO  | It takes a moment until task bb89f439-c376-4a6b-a473-888c5004b2c2 (cinder) has been started and output is visible here. 2026-03-24 03:21:44.901627 | orchestrator | 2026-03-24 03:21:44.901749 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:21:44.901765 | orchestrator | 2026-03-24 03:21:44.901776 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:21:44.901785 | orchestrator | Tuesday 24 March 2026 03:21:13 +0000 (0:00:00.185) 0:00:00.185 ********* 2026-03-24 03:21:44.901795 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:21:44.901804 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:21:44.901813 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:21:44.901822 | orchestrator | 2026-03-24 03:21:44.901831 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:21:44.901840 | orchestrator | Tuesday 24 March 2026 03:21:13 +0000 (0:00:00.214) 0:00:00.399 ********* 2026-03-24 03:21:44.901849 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-24 03:21:44.901858 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-24 03:21:44.901867 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-24 03:21:44.901876 | orchestrator | 2026-03-24 03:21:44.901885 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-24 03:21:44.901893 | orchestrator | 
2026-03-24 03:21:44.901902 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-24 03:21:44.901911 | orchestrator | Tuesday 24 March 2026 03:21:14 +0000 (0:00:00.363) 0:00:00.763 ********* 2026-03-24 03:21:44.901920 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:21:44.901930 | orchestrator | 2026-03-24 03:21:44.901939 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-24 03:21:44.901948 | orchestrator | Tuesday 24 March 2026 03:21:14 +0000 (0:00:00.491) 0:00:01.254 ********* 2026-03-24 03:21:44.901957 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-24 03:21:44.901965 | orchestrator | 2026-03-24 03:21:44.901975 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-24 03:21:44.901984 | orchestrator | Tuesday 24 March 2026 03:21:18 +0000 (0:00:03.644) 0:00:04.899 ********* 2026-03-24 03:21:44.901993 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-24 03:21:44.902002 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-24 03:21:44.902011 | orchestrator | 2026-03-24 03:21:44.902075 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-24 03:21:44.902146 | orchestrator | Tuesday 24 March 2026 03:21:24 +0000 (0:00:06.501) 0:00:11.400 ********* 2026-03-24 03:21:44.902159 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-24 03:21:44.902169 | orchestrator | 2026-03-24 03:21:44.902204 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-24 03:21:44.902215 | orchestrator | Tuesday 24 March 2026 03:21:28 +0000 
(0:00:03.231) 0:00:14.631 ********* 2026-03-24 03:21:44.902226 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:21:44.902237 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-24 03:21:44.902245 | orchestrator | 2026-03-24 03:21:44.902254 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-24 03:21:44.902263 | orchestrator | Tuesday 24 March 2026 03:21:32 +0000 (0:00:03.920) 0:00:18.552 ********* 2026-03-24 03:21:44.902271 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-24 03:21:44.902280 | orchestrator | 2026-03-24 03:21:44.902289 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-24 03:21:44.902297 | orchestrator | Tuesday 24 March 2026 03:21:35 +0000 (0:00:03.182) 0:00:21.734 ********* 2026-03-24 03:21:44.902306 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-24 03:21:44.902315 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-24 03:21:44.902323 | orchestrator | 2026-03-24 03:21:44.902332 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-24 03:21:44.902340 | orchestrator | Tuesday 24 March 2026 03:21:42 +0000 (0:00:07.612) 0:00:29.347 ********* 2026-03-24 03:21:44.902366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': 
'30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:21:44.902400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:21:44.902410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:21:44.902429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:21:44.902440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:21:44.902453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:21:44.902463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:21:44.902479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:21:50.097092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:21:50.097244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:21:50.097261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:21:50.097287 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:21:50.097299 | orchestrator | 2026-03-24 03:21:50.097311 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-24 03:21:50.097322 | orchestrator | Tuesday 24 March 2026 03:21:44 +0000 (0:00:02.037) 0:00:31.384 ********* 2026-03-24 03:21:50.097333 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:21:50.097343 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:21:50.097353 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:21:50.097363 | orchestrator | 2026-03-24 03:21:50.097373 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-24 03:21:50.097383 | orchestrator | Tuesday 24 March 2026 03:21:45 +0000 (0:00:00.362) 0:00:31.747 ********* 2026-03-24 03:21:50.097394 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:21:50.097404 | orchestrator | 2026-03-24 03:21:50.097414 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-24 03:21:50.097424 | orchestrator | Tuesday 24 March 2026 03:21:45 +0000 (0:00:00.473) 0:00:32.220 ********* 2026-03-24 03:21:50.097434 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-24 
03:21:50.097444 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-24 03:21:50.097454 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-24 03:21:50.097465 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-24 03:21:50.097482 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-24 03:21:50.097491 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-24 03:21:50.097501 | orchestrator | 2026-03-24 03:21:50.097511 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-24 03:21:50.097521 | orchestrator | Tuesday 24 March 2026 03:21:47 +0000 (0:00:01.496) 0:00:33.717 ********* 2026-03-24 03:21:50.097548 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-24 03:21:50.097561 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-24 03:21:50.097578 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-24 03:21:50.097589 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-24 03:21:50.097606 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-24 03:22:00.249611 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-24 03:22:00.249703 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-24 03:22:00.249723 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-24 03:22:00.249728 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-24 03:22:00.249733 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-24 03:22:00.249763 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-24 
03:22:00.249773 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-24 03:22:00.249779 | orchestrator | 2026-03-24 03:22:00.249785 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-24 03:22:00.249790 | orchestrator | Tuesday 24 March 2026 03:21:50 +0000 (0:00:02.973) 0:00:36.690 ********* 2026-03-24 03:22:00.249795 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-24 03:22:00.249803 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-24 03:22:00.249809 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-24 03:22:00.249815 | orchestrator | 2026-03-24 03:22:00.249822 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-24 03:22:00.249827 | orchestrator | Tuesday 24 March 2026 03:21:51 +0000 (0:00:01.428) 0:00:38.119 ********* 2026-03-24 03:22:00.249836 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-24 03:22:00.249844 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-24 03:22:00.249851 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-24 03:22:00.249856 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-24 03:22:00.249863 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-24 03:22:00.249873 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-24 03:22:00.249879 | orchestrator | 2026-03-24 03:22:00.249884 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-24 03:22:00.249890 | orchestrator | Tuesday 24 March 2026 03:21:54 +0000 (0:00:02.487) 0:00:40.606 ********* 2026-03-24 03:22:00.249897 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-24 03:22:00.249904 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-24 03:22:00.249916 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-24 03:22:00.249922 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-24 03:22:00.249927 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-24 03:22:00.249933 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-24 03:22:00.249939 | orchestrator | 2026-03-24 03:22:00.249945 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-24 03:22:00.249951 | orchestrator | Tuesday 24 March 2026 03:21:55 +0000 (0:00:01.010) 0:00:41.616 ********* 2026-03-24 03:22:00.249958 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:22:00.249964 | orchestrator | 2026-03-24 03:22:00.249970 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-24 03:22:00.249977 | orchestrator | Tuesday 24 March 2026 03:21:55 +0000 (0:00:00.126) 0:00:41.743 ********* 2026-03-24 03:22:00.249983 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:22:00.249989 | orchestrator | 
skipping: [testbed-node-1] 2026-03-24 03:22:00.249995 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:22:00.250001 | orchestrator | 2026-03-24 03:22:00.250007 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-24 03:22:00.250098 | orchestrator | Tuesday 24 March 2026 03:21:55 +0000 (0:00:00.463) 0:00:42.206 ********* 2026-03-24 03:22:00.250111 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:22:00.250119 | orchestrator | 2026-03-24 03:22:00.250123 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-24 03:22:00.250127 | orchestrator | Tuesday 24 March 2026 03:21:56 +0000 (0:00:00.536) 0:00:42.743 ********* 2026-03-24 03:22:00.250142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:01.078863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:01.078978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:01.079017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:01.079029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:01.079040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:01.079069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:01.079081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:01.079096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 
03:22:01.079113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:01.079124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:01.079134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:01.079144 | orchestrator | 2026-03-24 03:22:01.079157 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-24 03:22:01.079168 | orchestrator | Tuesday 24 March 2026 03:22:00 +0000 (0:00:03.999) 0:00:46.742 ********* 2026-03-24 03:22:01.079219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-24 03:22:01.173131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.173303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.173319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.173329 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:22:01.173340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-24 03:22:01.173350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.173377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.173398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.173409 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:22:01.173418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-24 03:22:01.173428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.173438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.173447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.173461 | orchestrator | skipping: 
[testbed-node-2] 2026-03-24 03:22:01.173481 | orchestrator | 2026-03-24 03:22:01.173491 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-24 03:22:01.173508 | orchestrator | Tuesday 24 March 2026 03:22:01 +0000 (0:00:00.828) 0:00:47.570 ********* 2026-03-24 03:22:01.705266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-24 03:22:01.705363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.705378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.705389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.705399 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:22:01.705410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-24 03:22:01.705456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.705472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.705482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.705491 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:22:01.705501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-24 03:22:01.705510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:22:01.705538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 03:22:06.091498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 03:22:06.091593 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:22:06.091602 | orchestrator | 2026-03-24 03:22:06.091620 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-03-24 03:22:06.091626 | orchestrator | Tuesday 24 March 2026 03:22:01 +0000 (0:00:00.809) 0:00:48.380 ********* 2026-03-24 03:22:06.091631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:06.091637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 
03:22:06.091642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:06.091690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:06.091696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:06.091703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:06.091708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:06.091712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:06.091716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:06.091728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:17.225677 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:17.225789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:17.225808 | orchestrator | 2026-03-24 03:22:17.225824 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-24 03:22:17.225838 | orchestrator | Tuesday 24 March 2026 03:22:06 +0000 (0:00:04.186) 0:00:52.566 ********* 2026-03-24 03:22:17.225850 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-24 03:22:17.225863 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-24 03:22:17.225876 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-24 03:22:17.225888 | orchestrator | 2026-03-24 03:22:17.225901 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-24 03:22:17.225913 | orchestrator | Tuesday 24 March 2026 03:22:07 +0000 (0:00:01.654) 0:00:54.220 ********* 2026-03-24 03:22:17.225927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:17.225965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:17.226000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:17.226010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:17.226068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:17.226077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:17.226092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:17.226101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:17.226116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:19.448307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:19.448383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:19.448391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:19.448414 | orchestrator | 2026-03-24 03:22:19.448422 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-24 03:22:19.448428 | orchestrator | Tuesday 24 March 2026 03:22:17 +0000 (0:00:09.487) 0:01:03.708 ********* 2026-03-24 03:22:19.448434 | orchestrator | changed: [testbed-node-1] 
2026-03-24 03:22:19.448444 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:22:19.448466 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:22:19.448476 | orchestrator | 2026-03-24 03:22:19.448484 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-24 03:22:19.448492 | orchestrator | Tuesday 24 March 2026 03:22:18 +0000 (0:00:01.499) 0:01:05.208 ********* 2026-03-24 03:22:19.448501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-24 03:22:19.448512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-03-24 03:22:19.448541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 03:22:19.448551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 03:22:19.448567 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:22:19.448576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-24 03:22:19.448585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:22:19.448592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 03:22:19.448607 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 03:22:22.942793 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:22:22.942927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-24 03:22:22.942985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:22:22.943006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 03:22:22.943024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 03:22:22.943043 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:22:22.943061 | orchestrator | 2026-03-24 
03:22:22.943081 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-24 03:22:22.943099 | orchestrator | Tuesday 24 March 2026 03:22:19 +0000 (0:00:00.716) 0:01:05.924 ********* 2026-03-24 03:22:22.943116 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:22:22.943133 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:22:22.943149 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:22:22.943203 | orchestrator | 2026-03-24 03:22:22.943221 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-24 03:22:22.943237 | orchestrator | Tuesday 24 March 2026 03:22:19 +0000 (0:00:00.427) 0:01:06.352 ********* 2026-03-24 03:22:22.943300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:22.943336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:22.943354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-24 03:22:22.943373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:22.943393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:22.943416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:22:22.943450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:23:53.972727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:23:53.972848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-24 03:23:53.972867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:23:53.972880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-24 03:23:53.972909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-24 03:23:53.972945 | orchestrator | 2026-03-24 03:23:53.972959 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-24 03:23:53.972972 | orchestrator | Tuesday 24 March 2026 03:22:23 +0000 (0:00:03.071) 0:01:09.423 ********* 2026-03-24 03:23:53.972983 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:23:53.972995 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:23:53.973006 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:23:53.973017 | orchestrator | 2026-03-24 03:23:53.973028 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-24 03:23:53.973040 | orchestrator | Tuesday 24 March 2026 03:22:23 +0000 (0:00:00.252) 0:01:09.675 ********* 2026-03-24 03:23:53.973051 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:23:53.973062 | orchestrator | 2026-03-24 03:23:53.973091 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-24 03:23:53.973103 | orchestrator | Tuesday 24 March 2026 03:22:25 +0000 (0:00:02.142) 0:01:11.818 ********* 2026-03-24 03:23:53.973114 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:23:53.973125 | orchestrator | 2026-03-24 03:23:53.973192 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-24 03:23:53.973203 | orchestrator | Tuesday 24 March 2026 03:22:27 +0000 (0:00:02.221) 0:01:14.039 ********* 2026-03-24 03:23:53.973214 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:23:53.973225 | orchestrator | 2026-03-24 03:23:53.973236 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-24 03:23:53.973247 | orchestrator | Tuesday 24 March 2026 03:22:46 +0000 (0:00:19.324) 0:01:33.363 ********* 2026-03-24 03:23:53.973260 | orchestrator | 2026-03-24 03:23:53.973272 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-03-24 03:23:53.973284 | orchestrator | Tuesday 24 March 2026 03:22:47 +0000 (0:00:00.066) 0:01:33.430 ********* 2026-03-24 03:23:53.973296 | orchestrator | 2026-03-24 03:23:53.973309 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-24 03:23:53.973322 | orchestrator | Tuesday 24 March 2026 03:22:47 +0000 (0:00:00.067) 0:01:33.498 ********* 2026-03-24 03:23:53.973334 | orchestrator | 2026-03-24 03:23:53.973346 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-24 03:23:53.973361 | orchestrator | Tuesday 24 March 2026 03:22:47 +0000 (0:00:00.067) 0:01:33.565 ********* 2026-03-24 03:23:53.973379 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:23:53.973398 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:23:53.973414 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:23:53.973430 | orchestrator | 2026-03-24 03:23:53.973447 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-24 03:23:53.973467 | orchestrator | Tuesday 24 March 2026 03:23:08 +0000 (0:00:21.116) 0:01:54.682 ********* 2026-03-24 03:23:53.973485 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:23:53.973504 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:23:53.973523 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:23:53.973541 | orchestrator | 2026-03-24 03:23:53.973561 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-24 03:23:53.973577 | orchestrator | Tuesday 24 March 2026 03:23:18 +0000 (0:00:09.952) 0:02:04.634 ********* 2026-03-24 03:23:53.973590 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:23:53.973603 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:23:53.973613 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:23:53.973624 | orchestrator | 2026-03-24 
03:23:53.973635 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-24 03:23:53.973645 | orchestrator | Tuesday 24 March 2026 03:23:43 +0000 (0:00:24.837) 0:02:29.471 ********* 2026-03-24 03:23:53.973656 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:23:53.973666 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:23:53.973677 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:23:53.973700 | orchestrator | 2026-03-24 03:23:53.973711 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-24 03:23:53.973723 | orchestrator | Tuesday 24 March 2026 03:23:53 +0000 (0:00:10.626) 0:02:40.098 ********* 2026-03-24 03:23:53.973733 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:23:53.973744 | orchestrator | 2026-03-24 03:23:53.973755 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:23:53.973766 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-24 03:23:53.973779 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-24 03:23:53.973790 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-24 03:23:53.973800 | orchestrator | 2026-03-24 03:23:53.973811 | orchestrator | 2026-03-24 03:23:53.973822 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:23:53.973833 | orchestrator | Tuesday 24 March 2026 03:23:53 +0000 (0:00:00.257) 0:02:40.355 ********* 2026-03-24 03:23:53.973844 | orchestrator | =============================================================================== 2026-03-24 03:23:53.973855 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.84s 2026-03-24 03:23:53.973866 | orchestrator | cinder 
: Restart cinder-api container ---------------------------------- 21.12s 2026-03-24 03:23:53.973876 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.32s 2026-03-24 03:23:53.973889 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.63s 2026-03-24 03:23:53.973918 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.95s 2026-03-24 03:23:53.973942 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.49s 2026-03-24 03:23:53.973967 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.61s 2026-03-24 03:23:53.973984 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.50s 2026-03-24 03:23:53.974001 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.19s 2026-03-24 03:23:53.974092 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.00s 2026-03-24 03:23:53.974112 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.92s 2026-03-24 03:23:53.974124 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.64s 2026-03-24 03:23:53.974168 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.23s 2026-03-24 03:23:53.974186 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.18s 2026-03-24 03:23:53.974219 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.07s 2026-03-24 03:23:54.275564 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 2.97s 2026-03-24 03:23:54.275654 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.49s 2026-03-24 03:23:54.275662 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.22s 2026-03-24 03:23:54.275667 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.14s 2026-03-24 03:23:54.275672 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.04s 2026-03-24 03:23:56.439693 | orchestrator | 2026-03-24 03:23:56 | INFO  | Task 2a0d1163-e131-4c3d-94e8-ddef47a7e5bd (barbican) was prepared for execution. 2026-03-24 03:23:56.439785 | orchestrator | 2026-03-24 03:23:56 | INFO  | It takes a moment until task 2a0d1163-e131-4c3d-94e8-ddef47a7e5bd (barbican) has been started and output is visible here. 2026-03-24 03:24:40.925423 | orchestrator | 2026-03-24 03:24:40.925537 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:24:40.925576 | orchestrator | 2026-03-24 03:24:40.925588 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:24:40.925598 | orchestrator | Tuesday 24 March 2026 03:24:00 +0000 (0:00:00.243) 0:00:00.243 ********* 2026-03-24 03:24:40.925608 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:24:40.925619 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:24:40.925628 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:24:40.925639 | orchestrator | 2026-03-24 03:24:40.925649 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:24:40.925659 | orchestrator | Tuesday 24 March 2026 03:24:00 +0000 (0:00:00.302) 0:00:00.545 ********* 2026-03-24 03:24:40.925668 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-24 03:24:40.925679 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-24 03:24:40.925688 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-24 03:24:40.925698 | orchestrator | 2026-03-24 03:24:40.925707 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-03-24 03:24:40.925717 | orchestrator | 2026-03-24 03:24:40.925727 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-24 03:24:40.925736 | orchestrator | Tuesday 24 March 2026 03:24:01 +0000 (0:00:00.384) 0:00:00.929 ********* 2026-03-24 03:24:40.925746 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:24:40.925757 | orchestrator | 2026-03-24 03:24:40.925766 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-24 03:24:40.925776 | orchestrator | Tuesday 24 March 2026 03:24:01 +0000 (0:00:00.516) 0:00:01.446 ********* 2026-03-24 03:24:40.925786 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-24 03:24:40.925796 | orchestrator | 2026-03-24 03:24:40.925805 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-24 03:24:40.925815 | orchestrator | Tuesday 24 March 2026 03:24:05 +0000 (0:00:03.559) 0:00:05.006 ********* 2026-03-24 03:24:40.925824 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-24 03:24:40.925834 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-24 03:24:40.925844 | orchestrator | 2026-03-24 03:24:40.925854 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-24 03:24:40.925863 | orchestrator | Tuesday 24 March 2026 03:24:11 +0000 (0:00:06.695) 0:00:11.702 ********* 2026-03-24 03:24:40.925873 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-24 03:24:40.925883 | orchestrator | 2026-03-24 03:24:40.925892 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-24 
03:24:40.925902 | orchestrator | Tuesday 24 March 2026 03:24:15 +0000 (0:00:03.420) 0:00:15.122 ********* 2026-03-24 03:24:40.925912 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:24:40.925921 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-24 03:24:40.925931 | orchestrator | 2026-03-24 03:24:40.925940 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-24 03:24:40.925950 | orchestrator | Tuesday 24 March 2026 03:24:19 +0000 (0:00:04.060) 0:00:19.183 ********* 2026-03-24 03:24:40.925960 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-24 03:24:40.925969 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-24 03:24:40.925981 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-24 03:24:40.926007 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-24 03:24:40.926073 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-24 03:24:40.926085 | orchestrator | 2026-03-24 03:24:40.926096 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-24 03:24:40.926107 | orchestrator | Tuesday 24 March 2026 03:24:35 +0000 (0:00:16.176) 0:00:35.359 ********* 2026-03-24 03:24:40.926160 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-24 03:24:40.926188 | orchestrator | 2026-03-24 03:24:40.926206 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-24 03:24:40.926223 | orchestrator | Tuesday 24 March 2026 03:24:39 +0000 (0:00:03.878) 0:00:39.238 ********* 2026-03-24 03:24:40.926245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:24:40.926280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:24:40.926294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:24:40.926307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:40.926328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:40.926348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:40.926369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:46.397985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:46.398192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:46.398213 | orchestrator | 2026-03-24 03:24:46.398228 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-24 03:24:46.398241 | orchestrator | Tuesday 24 March 2026 03:24:40 +0000 (0:00:01.568) 0:00:40.806 ********* 2026-03-24 03:24:46.398254 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-24 03:24:46.398277 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-24 03:24:46.398299 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-24 03:24:46.398311 | orchestrator | 2026-03-24 03:24:46.398322 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-24 03:24:46.398333 | orchestrator | Tuesday 24 March 2026 03:24:41 +0000 (0:00:01.035) 0:00:41.841 ********* 2026-03-24 03:24:46.398344 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:24:46.398356 | orchestrator | 2026-03-24 03:24:46.398367 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-24 03:24:46.398404 | orchestrator | Tuesday 24 March 2026 03:24:42 +0000 (0:00:00.284) 0:00:42.126 ********* 2026-03-24 03:24:46.398416 | orchestrator | 
skipping: [testbed-node-0] 2026-03-24 03:24:46.398427 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:24:46.398437 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:24:46.398448 | orchestrator | 2026-03-24 03:24:46.398459 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-24 03:24:46.398470 | orchestrator | Tuesday 24 March 2026 03:24:42 +0000 (0:00:00.277) 0:00:42.404 ********* 2026-03-24 03:24:46.398495 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:24:46.398508 | orchestrator | 2026-03-24 03:24:46.398521 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-24 03:24:46.398533 | orchestrator | Tuesday 24 March 2026 03:24:43 +0000 (0:00:00.542) 0:00:42.947 ********* 2026-03-24 03:24:46.398548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:24:46.398582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:24:46.398597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:24:46.398610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:46.398639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:46.398653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:46.398666 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:46.398689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:47.698310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:24:47.698413 | orchestrator | 2026-03-24 03:24:47.698429 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-24 03:24:47.698442 | orchestrator | Tuesday 24 March 2026 03:24:46 +0000 (0:00:03.339) 0:00:46.286 ********* 2026-03-24 03:24:47.698479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-24 03:24:47.698507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:24:47.698520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:24:47.698531 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:24:47.698544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-24 03:24:47.698573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:24:47.698586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:24:47.698604 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:24:47.698621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-24 03:24:47.698634 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:24:47.698645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:24:47.698656 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:24:47.698667 | orchestrator | 2026-03-24 03:24:47.698678 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-24 03:24:47.698689 | orchestrator | Tuesday 24 March 2026 03:24:46 +0000 (0:00:00.571) 0:00:46.858 ********* 2026-03-24 03:24:47.698710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-24 03:24:51.226236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:24:51.226333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 
03:24:51.226346 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:24:51.226370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-24 03:24:51.226380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:24:51.226387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:24:51.226394 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:24:51.226418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-24 03:24:51.226484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:24:51.226494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:24:51.226499 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:24:51.226504 | orchestrator | 2026-03-24 03:24:51.226510 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-24 03:24:51.226515 | orchestrator | Tuesday 24 March 2026 03:24:47 +0000 (0:00:00.736) 0:00:47.594 ********* 2026-03-24 03:24:51.226520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:24:51.226525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:24:51.226540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:25:00.274938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:00.275086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:00.275189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:00.275211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:00.275226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:00.275265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:25:00.275279 | orchestrator |
2026-03-24 03:25:00.275308 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-24 03:25:00.275322 | orchestrator | Tuesday 24 March 2026 03:24:51 +0000 (0:00:03.521) 0:00:51.115 *********
2026-03-24 03:25:00.275334 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:25:00.275347 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:25:00.275356 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:25:00.275366 | orchestrator |
2026-03-24 03:25:00.275398 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-24 03:25:00.275411 | orchestrator | Tuesday 24 March 2026 03:24:52 +0000 (0:00:01.494) 0:00:52.609 *********
2026-03-24 03:25:00.275423 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-24 03:25:00.275435 | orchestrator |
2026-03-24 03:25:00.275448 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-24 03:25:00.275459 | orchestrator | Tuesday 24 March 2026 03:24:53 +0000 (0:00:00.873) 0:00:53.483 *********
2026-03-24 03:25:00.275471 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:25:00.275483 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:25:00.275494 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:25:00.275506 | orchestrator |
2026-03-24 03:25:00.275518 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-24 03:25:00.275529 | orchestrator | Tuesday 24 March 2026 03:24:54 +0000 (0:00:00.517) 0:00:54.001 *********
2026-03-24 03:25:00.275657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:25:00.275684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:25:00.275709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:25:00.275734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:01.084528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:01.084654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:01.084675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:01.084712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:01.084726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:01.084738 | orchestrator | 2026-03-24 03:25:01.084752 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-24 03:25:01.084767 | orchestrator | Tuesday 24 March 2026 03:25:00 +0000 (0:00:06.166) 0:01:00.168 ********* 2026-03-24 03:25:01.084798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-24 03:25:01.084818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:25:01.084831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:25:01.084844 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:25:01.084858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-24 03:25:01.084883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:25:01.084895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:25:01.084906 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:25:01.084928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-24 03:25:03.412155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:25:03.412278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:25:03.412324 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:25:03.412343 | orchestrator | 2026-03-24 03:25:03.412358 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-24 03:25:03.412373 | orchestrator | Tuesday 24 March 2026 03:25:01 +0000 (0:00:00.807) 0:01:00.975 ********* 2026-03-24 03:25:03.412386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:25:03.412401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:25:03.412438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-24 03:25:03.412463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:03.412489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:03.412504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:03.412518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:03.412532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:03.412546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:25:03.412561 | orchestrator | 2026-03-24 03:25:03.412575 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-24 03:25:03.412598 | orchestrator | Tuesday 24 March 2026 03:25:03 +0000 (0:00:02.322) 0:01:03.298 ********* 2026-03-24 03:25:40.772379 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:25:40.772471 | orchestrator | skipping: [testbed-node-1] 2026-03-24 
03:25:40.772481 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:25:40.772489 | orchestrator |
2026-03-24 03:25:40.772512 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-24 03:25:40.772538 | orchestrator | Tuesday 24 March 2026 03:25:03 +0000 (0:00:00.251) 0:01:03.550 *********
2026-03-24 03:25:40.772545 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:25:40.772551 | orchestrator |
2026-03-24 03:25:40.772558 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-24 03:25:40.772564 | orchestrator | Tuesday 24 March 2026 03:25:05 +0000 (0:00:02.163) 0:01:05.713 *********
2026-03-24 03:25:40.772570 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:25:40.772577 | orchestrator |
2026-03-24 03:25:40.772583 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-24 03:25:40.772590 | orchestrator | Tuesday 24 March 2026 03:25:08 +0000 (0:00:02.208) 0:01:07.922 *********
2026-03-24 03:25:40.772596 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:25:40.772603 | orchestrator |
2026-03-24 03:25:40.772609 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-24 03:25:40.772616 | orchestrator | Tuesday 24 March 2026 03:25:19 +0000 (0:00:11.847) 0:01:19.769 *********
2026-03-24 03:25:40.772622 | orchestrator |
2026-03-24 03:25:40.772628 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-24 03:25:40.772634 | orchestrator | Tuesday 24 March 2026 03:25:19 +0000 (0:00:00.060) 0:01:19.830 *********
2026-03-24 03:25:40.772641 | orchestrator |
2026-03-24 03:25:40.772647 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-24 03:25:40.772653 | orchestrator | Tuesday 24 March 2026 03:25:19 +0000 (0:00:00.060) 0:01:19.891 *********
2026-03-24 03:25:40.772659 | orchestrator |
2026-03-24 03:25:40.772666 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-24 03:25:40.772672 | orchestrator | Tuesday 24 March 2026 03:25:20 +0000 (0:00:00.083) 0:01:19.974 *********
2026-03-24 03:25:40.772678 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:25:40.772684 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:25:40.772690 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:25:40.772696 | orchestrator |
2026-03-24 03:25:40.772702 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-24 03:25:40.772709 | orchestrator | Tuesday 24 March 2026 03:25:30 +0000 (0:00:10.831) 0:01:30.805 *********
2026-03-24 03:25:40.772715 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:25:40.772722 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:25:40.772728 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:25:40.772735 | orchestrator |
2026-03-24 03:25:40.772741 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-24 03:25:40.772747 | orchestrator | Tuesday 24 March 2026 03:25:35 +0000 (0:00:04.636) 0:01:35.442 *********
2026-03-24 03:25:40.772753 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:25:40.772760 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:25:40.772767 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:25:40.772773 | orchestrator |
2026-03-24 03:25:40.772779 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 03:25:40.772787 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-24 03:25:40.772795 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 03:25:40.772801 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 03:25:40.772808 | orchestrator |
2026-03-24 03:25:40.772813 | orchestrator |
2026-03-24 03:25:40.772820 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 03:25:40.772827 | orchestrator | Tuesday 24 March 2026 03:25:40 +0000 (0:00:04.931) 0:01:40.373 *********
2026-03-24 03:25:40.772833 | orchestrator | ===============================================================================
2026-03-24 03:25:40.772840 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.18s
2026-03-24 03:25:40.772851 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.85s
2026-03-24 03:25:40.772858 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.83s
2026-03-24 03:25:40.772865 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.70s
2026-03-24 03:25:40.772871 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.17s
2026-03-24 03:25:40.772878 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 4.93s
2026-03-24 03:25:40.772884 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.64s
2026-03-24 03:25:40.772890 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.06s
2026-03-24 03:25:40.772896 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.88s
2026-03-24 03:25:40.772903 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.56s
2026-03-24 03:25:40.772909 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.52s
2026-03-24 03:25:40.772916 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.42s
2026-03-24 03:25:40.772922 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.34s
2026-03-24 03:25:40.772928 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.32s
2026-03-24 03:25:40.772935 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.21s
2026-03-24 03:25:40.772955 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.16s
2026-03-24 03:25:40.772962 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.57s
2026-03-24 03:25:40.772974 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.49s
2026-03-24 03:25:40.772981 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.04s
2026-03-24 03:25:40.772987 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.87s
2026-03-24 03:25:42.947739 | orchestrator | 2026-03-24 03:25:42 | INFO  | Task b4653809-d5f9-4aba-9299-6a4cb32ffce8 (designate) was prepared for execution.
2026-03-24 03:25:42.947839 | orchestrator | 2026-03-24 03:25:42 | INFO  | It takes a moment until task b4653809-d5f9-4aba-9299-6a4cb32ffce8 (designate) has been started and output is visible here.
2026-03-24 03:26:14.622378 | orchestrator | 2026-03-24 03:26:14.622671 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:26:14.622694 | orchestrator | 2026-03-24 03:26:14.622708 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:26:14.622721 | orchestrator | Tuesday 24 March 2026 03:25:46 +0000 (0:00:00.247) 0:00:00.247 ********* 2026-03-24 03:26:14.622733 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:26:14.622745 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:26:14.622755 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:26:14.622766 | orchestrator | 2026-03-24 03:26:14.622777 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:26:14.622789 | orchestrator | Tuesday 24 March 2026 03:25:47 +0000 (0:00:00.278) 0:00:00.526 ********* 2026-03-24 03:26:14.622801 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-24 03:26:14.622813 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-24 03:26:14.622825 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-24 03:26:14.622836 | orchestrator | 2026-03-24 03:26:14.622848 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-24 03:26:14.622860 | orchestrator | 2026-03-24 03:26:14.622871 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-24 03:26:14.622883 | orchestrator | Tuesday 24 March 2026 03:25:47 +0000 (0:00:00.401) 0:00:00.928 ********* 2026-03-24 03:26:14.622895 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:26:14.622938 | orchestrator | 2026-03-24 03:26:14.622952 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-03-24 03:26:14.622966 | orchestrator | Tuesday 24 March 2026 03:25:48 +0000 (0:00:00.520) 0:00:01.448 ********* 2026-03-24 03:26:14.622980 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-24 03:26:14.622995 | orchestrator | 2026-03-24 03:26:14.623008 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-24 03:26:14.623023 | orchestrator | Tuesday 24 March 2026 03:25:51 +0000 (0:00:03.556) 0:00:05.005 ********* 2026-03-24 03:26:14.623037 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-24 03:26:14.623053 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-24 03:26:14.623065 | orchestrator | 2026-03-24 03:26:14.623079 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-24 03:26:14.623132 | orchestrator | Tuesday 24 March 2026 03:25:58 +0000 (0:00:06.661) 0:00:11.667 ********* 2026-03-24 03:26:14.623143 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-24 03:26:14.623155 | orchestrator | 2026-03-24 03:26:14.623170 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-24 03:26:14.623183 | orchestrator | Tuesday 24 March 2026 03:26:01 +0000 (0:00:03.167) 0:00:14.834 ********* 2026-03-24 03:26:14.623197 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:26:14.623210 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-24 03:26:14.623224 | orchestrator | 2026-03-24 03:26:14.623239 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-24 03:26:14.623252 | orchestrator | Tuesday 24 March 2026 03:26:05 +0000 (0:00:04.012) 0:00:18.847 ********* 2026-03-24 03:26:14.623265 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-03-24 03:26:14.623279 | orchestrator | 2026-03-24 03:26:14.623292 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-24 03:26:14.623305 | orchestrator | Tuesday 24 March 2026 03:26:08 +0000 (0:00:03.256) 0:00:22.103 ********* 2026-03-24 03:26:14.623316 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-24 03:26:14.623328 | orchestrator | 2026-03-24 03:26:14.623338 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-24 03:26:14.623349 | orchestrator | Tuesday 24 March 2026 03:26:12 +0000 (0:00:03.791) 0:00:25.895 ********* 2026-03-24 03:26:14.623382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:14.623423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:14.623447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:14.623460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:14.623473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:14.623485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:14.623503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:14.623525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.780789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.780903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.780926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.780942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.780958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.780991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.781035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.781045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 
03:26:20.781053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.781062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:20.781070 | orchestrator | 2026-03-24 03:26:20.781080 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-24 03:26:20.781162 | orchestrator | Tuesday 24 March 2026 03:26:15 +0000 (0:00:02.790) 0:00:28.686 ********* 2026-03-24 03:26:20.781171 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:26:20.781180 | orchestrator | 2026-03-24 03:26:20.781188 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-24 03:26:20.781196 | orchestrator | Tuesday 24 March 2026 03:26:15 +0000 (0:00:00.130) 0:00:28.816 ********* 2026-03-24 03:26:20.781204 | orchestrator | skipping: [testbed-node-0] 2026-03-24 
03:26:20.781212 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:26:20.781220 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:26:20.781228 | orchestrator | 2026-03-24 03:26:20.781236 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-24 03:26:20.781244 | orchestrator | Tuesday 24 March 2026 03:26:15 +0000 (0:00:00.458) 0:00:29.275 ********* 2026-03-24 03:26:20.781253 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:26:20.781261 | orchestrator | 2026-03-24 03:26:20.781268 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-24 03:26:20.781284 | orchestrator | Tuesday 24 March 2026 03:26:16 +0000 (0:00:00.584) 0:00:29.860 ********* 2026-03-24 03:26:20.781300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:20.781319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:22.592773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:22.592864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:22.592877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:22.592921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:22.592932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:22.592955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:22.592964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:22.592973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:22.592983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:22.592991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:22.593010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:22.593019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:22.593035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:23.408416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:23.408512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:23.408526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:23.408556 | orchestrator | 2026-03-24 03:26:23.408564 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-24 03:26:23.408572 | orchestrator | Tuesday 24 March 2026 03:26:22 +0000 (0:00:06.028) 0:00:35.889 ********* 2026-03-24 03:26:23.408592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 03:26:23.408601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 03:26:23.408620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 03:26:23.408627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 03:26:23.408634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 03:26:23.408643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-03-24 03:26:23.408661 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:26:23.408678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 03:26:23.408690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 03:26:23.408701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 03:26:23.408719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.132026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.132241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.132275 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:26:24.132315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 03:26:24.132340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 03:26:24.132362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.132384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.132431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 
03:26:24.132476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.132499 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:26:24.132518 | orchestrator | 2026-03-24 03:26:24.132537 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-24 03:26:24.132557 | orchestrator | Tuesday 24 March 2026 03:26:23 +0000 (0:00:00.922) 0:00:36.812 ********* 2026-03-24 03:26:24.132588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 03:26:24.132614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 03:26:24.132635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.132667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.453618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.453738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.453758 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:26:24.453795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 03:26:24.453808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 03:26:24.453818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.453827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.453873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.453883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.453891 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:26:24.453904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 03:26:24.453914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 03:26:24.453922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.453930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 03:26:24.453951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 03:26:28.800896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:26:28.800980 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:26:28.800988 | orchestrator | 2026-03-24 03:26:28.800993 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-24 
03:26:28.800999 | orchestrator | Tuesday 24 March 2026 03:26:24 +0000 (0:00:00.938) 0:00:37.750 ********* 2026-03-24 03:26:28.801015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:28.801022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:28.801026 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:28.801052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:28.801058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:28.801066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:28.801070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:28.801075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:28.801079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:28.801135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:28.801145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:39.675634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:39.675743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:39.675754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:39.675761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:39.675787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:39.675795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:39.675816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:39.675823 | orchestrator | 2026-03-24 03:26:39.675831 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-24 03:26:39.675839 | orchestrator | Tuesday 24 March 2026 03:26:30 +0000 (0:00:06.117) 0:00:43.868 ********* 2026-03-24 03:26:39.675850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:39.675859 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:39.675871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-24 03:26:39.675879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:39.675891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:47.293288 | orchestrator | 2026-03-24 03:26:47.293296 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-24 03:26:47.293304 | orchestrator | Tuesday 24 March 2026 03:26:43 +0000 (0:00:13.307) 0:00:57.176 ********* 2026-03-24 03:26:47.293314 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-24 03:26:51.395873 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-24 03:26:51.395945 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-24 03:26:51.395952 | orchestrator | 2026-03-24 03:26:51.395957 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-24 03:26:51.395961 | orchestrator | Tuesday 24 March 2026 03:26:47 +0000 (0:00:03.412) 0:01:00.588 ********* 2026-03-24 03:26:51.395965 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-24 03:26:51.395969 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-24 03:26:51.395973 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-24 03:26:51.395977 | orchestrator | 2026-03-24 03:26:51.395981 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-24 03:26:51.395995 | orchestrator | Tuesday 24 March 2026 03:26:49 +0000 (0:00:02.331) 0:01:02.920 ********* 2026-03-24 03:26:51.396002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 03:26:51.396025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-24 03:26:51.396030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-03-24 03:26:51.396045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:51.396051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 03:26:51.396065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-03-24 03:26:51.396095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 03:26:51.396101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-24 03:26:51.396112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-03-24 03:26:51.396116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 03:26:51.396125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 03:26:54.091442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-03-24 03:26:54.091640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 03:26:54.091662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 03:26:54.091672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 03:26:54.091681 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:54.091690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:54.091717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:26:54.091735 | orchestrator | 2026-03-24 03:26:54.091744 | orchestrator | TASK [designate : Copying over rndc.key] 
***************************************
2026-03-24 03:26:54.091753 | orchestrator | Tuesday 24 March 2026 03:26:52 +0000 (0:00:02.803) 0:01:05.724 *********
2026-03-24 03:26:54.091768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-24 03:26:54.091779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-24 03:26:54.091788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-24 03:26:54.091796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 03:26:54.091810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 03:26:55.118459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 03:26:55.118539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:26:55.118603 | orchestrator |
2026-03-24 03:26:55.118614 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-24 03:26:55.118630 | orchestrator | Tuesday 24 March 2026 03:26:55 +0000 (0:00:02.687) 0:01:08.411 *********
2026-03-24 03:26:56.036874 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:26:56.036951 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:26:56.036958 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:26:56.036964 | orchestrator |
2026-03-24 03:26:56.036969 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-03-24 03:26:56.036975 | orchestrator | Tuesday 24 March 2026 03:26:55 +0000 (0:00:00.285) 0:01:08.696 *********
2026-03-24 03:26:56.036994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-24 03:26:56.037003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 03:26:56.037010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 03:26:56.037016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 03:26:56.037036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 03:26:56.037053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:26:56.037058 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:26:56.037065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-24 03:26:56.037127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 03:26:56.037133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 03:26:56.037138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 03:26:56.037146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 03:26:56.037155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:26:59.419589 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:26:59.419683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-24 03:26:59.419695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 03:26:59.419701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 03:26:59.419708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 03:26:59.419728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 03:26:59.419733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:26:59.419737 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:26:59.419742 | orchestrator |
2026-03-24 03:26:59.419758 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-24 03:26:59.419764 | orchestrator | Tuesday 24 March 2026 03:26:56 +0000 (0:00:00.734) 0:01:09.431 *********
2026-03-24 03:26:59.419771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-24 03:26:59.419777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-24 03:26:59.419782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-24 03:26:59.419791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 03:26:59.419799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 03:27:01.201024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 03:27:01.201161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 03:27:01.201368 | orchestrator |
2026-03-24 03:27:01.201390 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-24 03:27:01.201411 | orchestrator | Tuesday 24 March 2026 03:27:00 +0000 (0:00:04.793) 0:01:14.225 *********
2026-03-24 03:27:01.201430 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:27:01.201461 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:28:18.756524 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:28:18.756641 | orchestrator |
2026-03-24 03:28:18.756659 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-24 03:28:18.756689 | orchestrator | Tuesday 24 March 2026 03:27:01 +0000 (0:00:00.272) 0:01:14.498 *********
2026-03-24 03:28:18.756701 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-24 03:28:18.756713 | orchestrator |
2026-03-24 03:28:18.756724 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-24 03:28:18.756736 | orchestrator | Tuesday 24 March 2026 03:27:03 +0000 (0:00:02.169) 0:01:16.668 *********
2026-03-24 03:28:18.756747 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-24 03:28:18.756759 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-24 03:28:18.756772 | orchestrator |
2026-03-24 03:28:18.756791 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-24 03:28:18.756810 | orchestrator | Tuesday 24 March 2026 03:27:05 +0000 (0:00:02.269) 0:01:18.937 *********
2026-03-24 03:28:18.756830 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:28:18.756849 | orchestrator |
2026-03-24 03:28:18.756868 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-24 03:28:18.756887 | orchestrator | Tuesday 24 March 2026 03:27:21 +0000 (0:00:15.756) 0:01:34.694 *********
2026-03-24 03:28:18.756904 | orchestrator |
2026-03-24 03:28:18.756922 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-24 03:28:18.756939 | orchestrator | Tuesday 24 March 2026 03:27:21 +0000 (0:00:00.069) 0:01:34.764 *********
2026-03-24 03:28:18.756957 | orchestrator |
2026-03-24 03:28:18.757005 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-24 03:28:18.757025 | orchestrator | Tuesday 24 March 2026 03:27:21 +0000 (0:00:00.066) 0:01:34.831 *********
2026-03-24 03:28:18.757077 | orchestrator |
2026-03-24
03:28:18.757095 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-24 03:28:18.757113 | orchestrator | Tuesday 24 March 2026 03:27:21 +0000 (0:00:00.067) 0:01:34.898 ********* 2026-03-24 03:28:18.757132 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:28:18.757150 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:28:18.757168 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:28:18.757185 | orchestrator | 2026-03-24 03:28:18.757203 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-24 03:28:18.757221 | orchestrator | Tuesday 24 March 2026 03:27:30 +0000 (0:00:08.882) 0:01:43.780 ********* 2026-03-24 03:28:18.757240 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:28:18.757258 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:28:18.757277 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:28:18.757296 | orchestrator | 2026-03-24 03:28:18.757316 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-24 03:28:18.757335 | orchestrator | Tuesday 24 March 2026 03:27:35 +0000 (0:00:05.311) 0:01:49.092 ********* 2026-03-24 03:28:18.757353 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:28:18.757372 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:28:18.757390 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:28:18.757407 | orchestrator | 2026-03-24 03:28:18.757425 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-24 03:28:18.757443 | orchestrator | Tuesday 24 March 2026 03:27:46 +0000 (0:00:10.400) 0:01:59.492 ********* 2026-03-24 03:28:18.757461 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:28:18.757478 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:28:18.757496 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:28:18.757514 | orchestrator | 2026-03-24 03:28:18.757531 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-24 03:28:18.757550 | orchestrator | Tuesday 24 March 2026 03:27:51 +0000 (0:00:05.471) 0:02:04.964 ********* 2026-03-24 03:28:18.757567 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:28:18.757585 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:28:18.757602 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:28:18.757621 | orchestrator | 2026-03-24 03:28:18.757639 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-24 03:28:18.757656 | orchestrator | Tuesday 24 March 2026 03:28:00 +0000 (0:00:08.813) 0:02:13.778 ********* 2026-03-24 03:28:18.757673 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:28:18.757691 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:28:18.757710 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:28:18.757729 | orchestrator | 2026-03-24 03:28:18.757748 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-24 03:28:18.757766 | orchestrator | Tuesday 24 March 2026 03:28:10 +0000 (0:00:10.476) 0:02:24.255 ********* 2026-03-24 03:28:18.757782 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:28:18.757799 | orchestrator | 2026-03-24 03:28:18.757817 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:28:18.757836 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-24 03:28:18.757856 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 03:28:18.757874 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 03:28:18.757891 | orchestrator | 2026-03-24 03:28:18.757909 | orchestrator | 2026-03-24 03:28:18.757926 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-24 03:28:18.757965 | orchestrator | Tuesday 24 March 2026 03:28:18 +0000 (0:00:07.488) 0:02:31.744 ********* 2026-03-24 03:28:18.757985 | orchestrator | =============================================================================== 2026-03-24 03:28:18.758002 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.76s 2026-03-24 03:28:18.758129 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.31s 2026-03-24 03:28:18.758183 | orchestrator | designate : Restart designate-worker container ------------------------- 10.48s 2026-03-24 03:28:18.758206 | orchestrator | designate : Restart designate-central container ------------------------ 10.40s 2026-03-24 03:28:18.758239 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.88s 2026-03-24 03:28:18.758261 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.81s 2026-03-24 03:28:18.758273 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.49s 2026-03-24 03:28:18.758284 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.66s 2026-03-24 03:28:18.758294 | orchestrator | designate : Copying over config.json files for services ----------------- 6.12s 2026-03-24 03:28:18.758305 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.03s 2026-03-24 03:28:18.758316 | orchestrator | designate : Restart designate-producer container ------------------------ 5.47s 2026-03-24 03:28:18.758326 | orchestrator | designate : Restart designate-api container ----------------------------- 5.31s 2026-03-24 03:28:18.758337 | orchestrator | designate : Check designate containers ---------------------------------- 4.79s 2026-03-24 03:28:18.758348 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.01s 2026-03-24 03:28:18.758359 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.79s 2026-03-24 03:28:18.758369 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.56s 2026-03-24 03:28:18.758380 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.41s 2026-03-24 03:28:18.758391 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.26s 2026-03-24 03:28:18.758402 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.17s 2026-03-24 03:28:18.758413 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.80s 2026-03-24 03:28:20.904213 | orchestrator | 2026-03-24 03:28:20 | INFO  | Task 4be0e16b-4c54-447c-9fb2-48eaf4d48975 (octavia) was prepared for execution. 2026-03-24 03:28:20.904324 | orchestrator | 2026-03-24 03:28:20 | INFO  | It takes a moment until task 4be0e16b-4c54-447c-9fb2-48eaf4d48975 (octavia) has been started and output is visible here. 
2026-03-24 03:30:27.901078 | orchestrator | 2026-03-24 03:30:27.901171 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:30:27.901182 | orchestrator | 2026-03-24 03:30:27.901192 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:30:27.901220 | orchestrator | Tuesday 24 March 2026 03:28:24 +0000 (0:00:00.245) 0:00:00.245 ********* 2026-03-24 03:30:27.901232 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:27.901252 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:30:27.901259 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:30:27.901266 | orchestrator | 2026-03-24 03:30:27.901272 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:30:27.901279 | orchestrator | Tuesday 24 March 2026 03:28:25 +0000 (0:00:00.302) 0:00:00.547 ********* 2026-03-24 03:30:27.901285 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-24 03:30:27.901292 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-24 03:30:27.901299 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-24 03:30:27.901305 | orchestrator | 2026-03-24 03:30:27.901312 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-24 03:30:27.901318 | orchestrator | 2026-03-24 03:30:27.901325 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-24 03:30:27.901350 | orchestrator | Tuesday 24 March 2026 03:28:25 +0000 (0:00:00.420) 0:00:00.967 ********* 2026-03-24 03:30:27.901357 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:30:27.901364 | orchestrator | 2026-03-24 03:30:27.901370 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-03-24 03:30:27.901376 | orchestrator | Tuesday 24 March 2026 03:28:26 +0000 (0:00:00.513) 0:00:01.481 ********* 2026-03-24 03:30:27.901383 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-24 03:30:27.901389 | orchestrator | 2026-03-24 03:30:27.901395 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-24 03:30:27.901401 | orchestrator | Tuesday 24 March 2026 03:28:29 +0000 (0:00:03.684) 0:00:05.166 ********* 2026-03-24 03:30:27.901407 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-24 03:30:27.901414 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-24 03:30:27.901420 | orchestrator | 2026-03-24 03:30:27.901426 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-24 03:30:27.901432 | orchestrator | Tuesday 24 March 2026 03:28:36 +0000 (0:00:06.686) 0:00:11.852 ********* 2026-03-24 03:30:27.901439 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-24 03:30:27.901445 | orchestrator | 2026-03-24 03:30:27.901451 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-24 03:30:27.901457 | orchestrator | Tuesday 24 March 2026 03:28:39 +0000 (0:00:03.347) 0:00:15.200 ********* 2026-03-24 03:30:27.901464 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:30:27.901470 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-24 03:30:27.901476 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-24 03:30:27.901482 | orchestrator | 2026-03-24 03:30:27.901488 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-24 03:30:27.901494 | orchestrator | Tuesday 24 March 2026 03:28:48 +0000 
(0:00:08.459) 0:00:23.659 ********* 2026-03-24 03:30:27.901501 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-24 03:30:27.901507 | orchestrator | 2026-03-24 03:30:27.901513 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-24 03:30:27.901531 | orchestrator | Tuesday 24 March 2026 03:28:51 +0000 (0:00:03.267) 0:00:26.926 ********* 2026-03-24 03:30:27.901537 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-24 03:30:27.901543 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-24 03:30:27.901549 | orchestrator | 2026-03-24 03:30:27.901556 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-24 03:30:27.901562 | orchestrator | Tuesday 24 March 2026 03:28:59 +0000 (0:00:07.351) 0:00:34.278 ********* 2026-03-24 03:30:27.901568 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-24 03:30:27.901574 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-24 03:30:27.901582 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-24 03:30:27.901593 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-24 03:30:27.901603 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-24 03:30:27.901620 | orchestrator | 2026-03-24 03:30:27.901632 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-24 03:30:27.901644 | orchestrator | Tuesday 24 March 2026 03:29:15 +0000 (0:00:16.142) 0:00:50.421 ********* 2026-03-24 03:30:27.901655 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:30:27.901665 | orchestrator | 2026-03-24 03:30:27.901676 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-03-24 03:30:27.901695 | orchestrator | Tuesday 24 March 2026 03:29:15 +0000 (0:00:00.736) 0:00:51.158 ********* 2026-03-24 03:30:27.901707 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.901718 | orchestrator | 2026-03-24 03:30:27.901729 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-24 03:30:27.901741 | orchestrator | Tuesday 24 March 2026 03:29:21 +0000 (0:00:05.149) 0:00:56.307 ********* 2026-03-24 03:30:27.901754 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.901765 | orchestrator | 2026-03-24 03:30:27.901774 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-24 03:30:27.901797 | orchestrator | Tuesday 24 March 2026 03:29:24 +0000 (0:00:03.771) 0:01:00.079 ********* 2026-03-24 03:30:27.901804 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:27.901811 | orchestrator | 2026-03-24 03:30:27.901818 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-24 03:30:27.901825 | orchestrator | Tuesday 24 March 2026 03:29:28 +0000 (0:00:03.251) 0:01:03.331 ********* 2026-03-24 03:30:27.901832 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-24 03:30:27.901839 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-24 03:30:27.901846 | orchestrator | 2026-03-24 03:30:27.901854 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-24 03:30:27.901861 | orchestrator | Tuesday 24 March 2026 03:29:37 +0000 (0:00:09.910) 0:01:13.241 ********* 2026-03-24 03:30:27.901868 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-24 03:30:27.901875 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-24 03:30:27.901884 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-24 03:30:27.901891 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-24 03:30:27.901898 | orchestrator | 2026-03-24 03:30:27.901906 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-24 03:30:27.901912 | orchestrator | Tuesday 24 March 2026 03:29:54 +0000 (0:00:16.179) 0:01:29.421 ********* 2026-03-24 03:30:27.901923 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.901929 | orchestrator | 2026-03-24 03:30:27.901935 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-24 03:30:27.901942 | orchestrator | Tuesday 24 March 2026 03:29:58 +0000 (0:00:04.527) 0:01:33.948 ********* 2026-03-24 03:30:27.901948 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.901954 | orchestrator | 2026-03-24 03:30:27.901960 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-24 03:30:27.901966 | orchestrator | Tuesday 24 March 2026 03:30:04 +0000 (0:00:05.405) 0:01:39.353 ********* 2026-03-24 03:30:27.901972 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:30:27.901978 | orchestrator | 2026-03-24 03:30:27.902010 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-24 03:30:27.902063 | orchestrator | Tuesday 24 March 2026 03:30:04 +0000 (0:00:00.192) 0:01:39.546 ********* 2026-03-24 03:30:27.902070 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:27.902076 | orchestrator | 2026-03-24 03:30:27.902082 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-24 03:30:27.902113 | orchestrator | Tuesday 24 March 2026 03:30:08 +0000 (0:00:04.367) 0:01:43.914 ********* 2026-03-24 03:30:27.902120 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:30:27.902127 | orchestrator | 2026-03-24 03:30:27.902134 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-24 03:30:27.902140 | orchestrator | Tuesday 24 March 2026 03:30:09 +0000 (0:00:00.903) 0:01:44.817 ********* 2026-03-24 03:30:27.902153 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:30:27.902159 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.902165 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:30:27.902172 | orchestrator | 2026-03-24 03:30:27.902178 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-24 03:30:27.902190 | orchestrator | Tuesday 24 March 2026 03:30:15 +0000 (0:00:05.536) 0:01:50.354 ********* 2026-03-24 03:30:27.902196 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:30:27.902202 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.902208 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:30:27.902215 | orchestrator | 2026-03-24 03:30:27.902221 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-24 03:30:27.902227 | orchestrator | Tuesday 24 March 2026 03:30:20 +0000 (0:00:05.252) 0:01:55.607 ********* 2026-03-24 03:30:27.902233 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.902239 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:30:27.902246 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:30:27.902252 | orchestrator | 2026-03-24 03:30:27.902258 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-24 
03:30:27.902264 | orchestrator | Tuesday 24 March 2026 03:30:21 +0000 (0:00:01.009) 0:01:56.616 ********* 2026-03-24 03:30:27.902270 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:30:27.902277 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:27.902283 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:30:27.902289 | orchestrator | 2026-03-24 03:30:27.902295 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-24 03:30:27.902302 | orchestrator | Tuesday 24 March 2026 03:30:23 +0000 (0:00:01.876) 0:01:58.493 ********* 2026-03-24 03:30:27.902308 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.902314 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:30:27.902320 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:30:27.902326 | orchestrator | 2026-03-24 03:30:27.902333 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-24 03:30:27.902339 | orchestrator | Tuesday 24 March 2026 03:30:24 +0000 (0:00:01.230) 0:01:59.723 ********* 2026-03-24 03:30:27.902345 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.902351 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:30:27.902357 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:30:27.902364 | orchestrator | 2026-03-24 03:30:27.902370 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-24 03:30:27.902376 | orchestrator | Tuesday 24 March 2026 03:30:25 +0000 (0:00:01.161) 0:02:00.884 ********* 2026-03-24 03:30:27.902382 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:30:27.902388 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:27.902395 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:30:27.902401 | orchestrator | 2026-03-24 03:30:27.902413 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-24 03:30:53.969503 | orchestrator 
| Tuesday 24 March 2026 03:30:27 +0000 (0:00:02.260) 0:02:03.145 ********* 2026-03-24 03:30:53.969596 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:30:53.969605 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:30:53.969610 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:30:53.969615 | orchestrator | 2026-03-24 03:30:53.969620 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-24 03:30:53.969625 | orchestrator | Tuesday 24 March 2026 03:30:29 +0000 (0:00:01.434) 0:02:04.579 ********* 2026-03-24 03:30:53.969629 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:53.969634 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:30:53.969638 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:30:53.969643 | orchestrator | 2026-03-24 03:30:53.969650 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-24 03:30:53.969657 | orchestrator | Tuesday 24 March 2026 03:30:29 +0000 (0:00:00.651) 0:02:05.230 ********* 2026-03-24 03:30:53.969665 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:30:53.969693 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:53.969701 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:30:53.969708 | orchestrator | 2026-03-24 03:30:53.969715 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-24 03:30:53.969723 | orchestrator | Tuesday 24 March 2026 03:30:33 +0000 (0:00:03.071) 0:02:08.302 ********* 2026-03-24 03:30:53.969730 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:30:53.969738 | orchestrator | 2026-03-24 03:30:53.969745 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-24 03:30:53.969751 | orchestrator | Tuesday 24 March 2026 03:30:33 +0000 (0:00:00.507) 0:02:08.809 ********* 2026-03-24 
03:30:53.969758 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:53.969769 | orchestrator | 2026-03-24 03:30:53.969776 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-24 03:30:53.969783 | orchestrator | Tuesday 24 March 2026 03:30:37 +0000 (0:00:04.140) 0:02:12.949 ********* 2026-03-24 03:30:53.969789 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:53.969795 | orchestrator | 2026-03-24 03:30:53.969802 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-24 03:30:53.969808 | orchestrator | Tuesday 24 March 2026 03:30:40 +0000 (0:00:03.236) 0:02:16.186 ********* 2026-03-24 03:30:53.969815 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-24 03:30:53.969822 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-24 03:30:53.969828 | orchestrator | 2026-03-24 03:30:53.969834 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-24 03:30:53.969841 | orchestrator | Tuesday 24 March 2026 03:30:47 +0000 (0:00:07.020) 0:02:23.207 ********* 2026-03-24 03:30:53.969848 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:53.969854 | orchestrator | 2026-03-24 03:30:53.969861 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-24 03:30:53.969868 | orchestrator | Tuesday 24 March 2026 03:30:51 +0000 (0:00:03.463) 0:02:26.670 ********* 2026-03-24 03:30:53.969875 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:30:53.969881 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:30:53.969888 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:30:53.969895 | orchestrator | 2026-03-24 03:30:53.969899 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-24 03:30:53.969904 | orchestrator | Tuesday 24 March 2026 03:30:51 +0000 (0:00:00.504) 0:02:27.175 ********* 
2026-03-24 03:30:53.969922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:30:53.969943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:30:53.969955 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:30:53.969959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:30:53.969966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:30:53.970008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:30:53.970052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:30:53.970058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:30:53.970073 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:30:55.334816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:30:55.334908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:30:55.334919 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:30:55.334944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:30:55.334953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:30:55.335062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:30:55.335073 | orchestrator | 2026-03-24 03:30:55.335082 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-24 03:30:55.335091 | orchestrator | Tuesday 24 March 2026 03:30:54 +0000 (0:00:02.490) 0:02:29.665 ********* 2026-03-24 03:30:55.335097 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:30:55.335105 | orchestrator | 2026-03-24 03:30:55.335112 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-24 03:30:55.335118 | orchestrator | Tuesday 24 March 2026 03:30:54 +0000 (0:00:00.123) 0:02:29.788 ********* 2026-03-24 03:30:55.335124 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:30:55.335147 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:30:55.335155 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:30:55.335161 | orchestrator | 2026-03-24 03:30:55.335168 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-24 03:30:55.335175 | orchestrator | Tuesday 24 March 2026 03:30:54 +0000 (0:00:00.272) 0:02:30.061 ********* 2026-03-24 03:30:55.335183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 03:30:55.335191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 03:30:55.335218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 03:30:55.335227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 03:30:55.335239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:30:55.335245 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:30:55.335258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 03:31:00.136842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 03:31:00.136953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 03:31:00.137105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 03:31:00.137130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:31:00.137172 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:31:00.137191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 03:31:00.137207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 03:31:00.137242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 03:31:00.137257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 03:31:00.137277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:31:00.137299 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:31:00.137313 | orchestrator | 2026-03-24 03:31:00.137340 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-24 03:31:00.137365 | orchestrator | Tuesday 24 March 2026 03:30:55 +0000 (0:00:00.610) 0:02:30.672 ********* 2026-03-24 03:31:00.137380 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:31:00.137392 | orchestrator | 2026-03-24 03:31:00.137406 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-24 03:31:00.137419 | orchestrator | Tuesday 24 March 2026 03:30:56 +0000 (0:00:00.670) 0:02:31.342 ********* 2026-03-24 03:31:00.137433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:00.137448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:00.137471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:01.598361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:01.598601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:01.598633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:01.598645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:01.598656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:01.598665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:01.598692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:01.598702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:01.598724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:01.598735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:31:01.598744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:31:01.598753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:31:01.598763 | orchestrator | 2026-03-24 03:31:01.598774 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-24 03:31:01.598784 | orchestrator | Tuesday 24 March 2026 03:31:01 +0000 (0:00:04.980) 0:02:36.323 ********* 2026-03-24 03:31:01.598801 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 03:31:01.692430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 03:31:01.692531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 03:31:01.692545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 03:31:01.692556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:31:01.692566 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:31:01.692576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 03:31:01.692586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 03:31:01.692640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 03:31:01.692656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 03:31:01.692666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:31:01.692675 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:31:01.692684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 03:31:01.692692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 03:31:01.692700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 03:31:01.692724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-24 03:31:02.405588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:31:02.405660 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:31:02.405667 | orchestrator | 2026-03-24 03:31:02.405672 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-24 03:31:02.405677 | orchestrator | Tuesday 24 March 2026 03:31:01 +0000 (0:00:00.619) 0:02:36.943 ********* 2026-03-24 03:31:02.405682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-03-24 03:31:02.405688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 03:31:02.405693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 03:31:02.405698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 03:31:02.405724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:31:02.405729 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:31:02.405736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 03:31:02.405740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 03:31:02.405744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 03:31:02.405748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 03:31:02.405756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:31:02.405759 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:31:02.405769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 03:31:06.904566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 03:31:06.904638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 03:31:06.904645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 03:31:06.904651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 03:31:06.904671 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:31:06.904676 | orchestrator | 2026-03-24 03:31:06.904681 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-24 
03:31:06.904686 | orchestrator | Tuesday 24 March 2026 03:31:02 +0000 (0:00:01.151) 0:02:38.094 ********* 2026-03-24 03:31:06.904691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:06.904716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:06.904721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:06.904725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:06.904730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:06.904737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:06.904741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:06.904750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:21.774767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:21.774871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:21.774883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:21.774910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:21.774919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:31:21.774927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-03-24 03:31:21.774999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:31:21.775008 | orchestrator | 2026-03-24 03:31:21.775016 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-24 03:31:21.775024 | orchestrator | Tuesday 24 March 2026 03:31:07 +0000 (0:00:05.029) 0:02:43.124 ********* 2026-03-24 03:31:21.775030 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-24 03:31:21.775038 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-24 03:31:21.775044 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-24 03:31:21.775050 | orchestrator | 2026-03-24 03:31:21.775056 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-24 03:31:21.775063 | orchestrator | Tuesday 24 March 2026 03:31:09 +0000 (0:00:01.583) 0:02:44.707 ********* 2026-03-24 03:31:21.775071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:21.775086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:21.775093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:21.775109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:36.117705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:36.117805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:36.117816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:36.117843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:36.117851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:36.117859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:36.117894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:36.117902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:36.117910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:31:36.117922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:31:36.117929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:31:36.117936 | orchestrator | 2026-03-24 03:31:36.117944 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-24 03:31:36.117953 | orchestrator | Tuesday 24 March 2026 03:31:24 +0000 (0:00:15.215) 0:02:59.922 ********* 2026-03-24 03:31:36.118121 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:31:36.118132 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:31:36.118140 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:31:36.118147 | orchestrator | 2026-03-24 03:31:36.118154 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-24 03:31:36.118162 | orchestrator | Tuesday 24 March 2026 03:31:26 +0000 (0:00:01.542) 0:03:01.465 ********* 2026-03-24 03:31:36.118169 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-24 03:31:36.118176 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-24 03:31:36.118183 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-24 03:31:36.118190 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-24 03:31:36.118198 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-24 03:31:36.118204 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-24 03:31:36.118211 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-24 03:31:36.118219 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-24 03:31:36.118226 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-24 03:31:36.118233 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-24 03:31:36.118240 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-24 03:31:36.118247 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-24 03:31:36.118254 | orchestrator | 2026-03-24 03:31:36.118262 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-24 03:31:36.118276 | orchestrator | Tuesday 24 March 2026 03:31:30 +0000 (0:00:04.779) 0:03:06.245 ********* 2026-03-24 03:31:36.118285 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-24 03:31:36.118293 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-24 03:31:36.118308 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-24 03:31:44.419579 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-24 03:31:44.419699 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-24 03:31:44.419706 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-24 03:31:44.419711 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-24 03:31:44.419716 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-24 03:31:44.419721 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-24 03:31:44.419726 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-24 03:31:44.419731 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-24 03:31:44.419735 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-24 03:31:44.419740 | orchestrator | 2026-03-24 03:31:44.419746 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-24 03:31:44.419752 | orchestrator | Tuesday 24 March 2026 03:31:36 +0000 (0:00:05.113) 0:03:11.358 ********* 2026-03-24 03:31:44.419757 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-03-24 03:31:44.419761 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-24 03:31:44.419765 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-24 03:31:44.419770 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-24 03:31:44.419775 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-24 03:31:44.419779 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-24 03:31:44.419783 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-24 03:31:44.419788 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-24 03:31:44.419792 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-24 03:31:44.419796 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-24 03:31:44.419801 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-24 03:31:44.419805 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-24 03:31:44.419809 | orchestrator | 2026-03-24 03:31:44.419814 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-24 03:31:44.419819 | orchestrator | Tuesday 24 March 2026 03:31:41 +0000 (0:00:05.172) 0:03:16.531 ********* 2026-03-24 03:31:44.419827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:44.419836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:44.419899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 03:31:44.419906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:44.419914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-24 03:31:44.419919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-24 03:31:44.419924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:44.419931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:44.419944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-24 03:31:44.419970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:33:03.446376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:33:03.446478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-24 03:33:03.446489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:03.446496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:03.446521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:03.446529 | orchestrator | 2026-03-24 
03:33:03.446537 | orchestrator | TASK [octavia : include_tasks] **************************************************
2026-03-24 03:33:03.446545 | orchestrator | Tuesday 24 March 2026 03:31:45 +0000 (0:00:00.294) 0:03:20.499 *********
2026-03-24 03:33:03.446551 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:03.446558 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:03.446564 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:03.446570 | orchestrator |
2026-03-24 03:33:03.446589 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-24 03:33:03.446595 | orchestrator | Tuesday 24 March 2026 03:31:45 +0000 (0:00:00.294) 0:03:20.794 *********
2026-03-24 03:33:03.446602 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.446608 | orchestrator |
2026-03-24 03:33:03.446614 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-24 03:33:03.446620 | orchestrator | Tuesday 24 March 2026 03:31:47 +0000 (0:00:02.157) 0:03:22.951 *********
2026-03-24 03:33:03.446626 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.446632 | orchestrator |
2026-03-24 03:33:03.446638 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-24 03:33:03.446645 | orchestrator | Tuesday 24 March 2026 03:31:49 +0000 (0:00:02.242) 0:03:25.194 *********
2026-03-24 03:33:03.446651 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.446657 | orchestrator |
2026-03-24 03:33:03.446663 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-24 03:33:03.446671 | orchestrator | Tuesday 24 March 2026 03:31:52 +0000 (0:00:02.406) 0:03:27.600 *********
2026-03-24 03:33:03.446688 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.446695 | orchestrator |
2026-03-24 03:33:03.446701 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-24 03:33:03.446707 | orchestrator | Tuesday 24 March 2026 03:31:54 +0000 (0:00:02.275) 0:03:29.875 *********
2026-03-24 03:33:03.446713 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.446719 | orchestrator |
2026-03-24 03:33:03.446726 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-24 03:33:03.446732 | orchestrator | Tuesday 24 March 2026 03:32:17 +0000 (0:00:22.857) 0:03:52.733 *********
2026-03-24 03:33:03.446738 | orchestrator |
2026-03-24 03:33:03.446744 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-24 03:33:03.446750 | orchestrator | Tuesday 24 March 2026 03:32:17 +0000 (0:00:00.066) 0:03:52.799 *********
2026-03-24 03:33:03.446757 | orchestrator |
2026-03-24 03:33:03.446763 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-24 03:33:03.446769 | orchestrator | Tuesday 24 March 2026 03:32:17 +0000 (0:00:00.065) 0:03:52.864 *********
2026-03-24 03:33:03.446775 | orchestrator |
2026-03-24 03:33:03.446781 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-24 03:33:03.446787 | orchestrator | Tuesday 24 March 2026 03:32:17 +0000 (0:00:00.064) 0:03:52.928 *********
2026-03-24 03:33:03.446793 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.446800 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:33:03.446806 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:33:03.446812 | orchestrator |
2026-03-24 03:33:03.446818 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-24 03:33:03.446824 | orchestrator | Tuesday 24 March 2026 03:32:33 +0000 (0:00:15.329) 0:04:08.258 *********
2026-03-24 03:33:03.446836 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.446843 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:33:03.446849 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:33:03.446855 | orchestrator |
2026-03-24 03:33:03.446861 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-24 03:33:03.446868 | orchestrator | Tuesday 24 March 2026 03:32:38 +0000 (0:00:05.992) 0:04:14.251 *********
2026-03-24 03:33:03.446874 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.446880 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:33:03.446886 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:33:03.446892 | orchestrator |
2026-03-24 03:33:03.446898 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-24 03:33:03.446906 | orchestrator | Tuesday 24 March 2026 03:32:49 +0000 (0:00:10.183) 0:04:24.434 *********
2026-03-24 03:33:03.446913 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.446920 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:33:03.447018 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:33:03.447030 | orchestrator |
2026-03-24 03:33:03.447040 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-24 03:33:03.447050 | orchestrator | Tuesday 24 March 2026 03:32:54 +0000 (0:00:05.518) 0:04:29.953 *********
2026-03-24 03:33:03.447060 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:33:03.447069 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:33:03.447078 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:33:03.447088 | orchestrator |
2026-03-24 03:33:03.447098 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 03:33:03.447109 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-24 03:33:03.447121 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 03:33:03.447132 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 03:33:03.447142 | orchestrator |
2026-03-24 03:33:03.447154 | orchestrator |
2026-03-24 03:33:03.447161 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 03:33:03.447167 | orchestrator | Tuesday 24 March 2026 03:33:03 +0000 (0:00:08.720) 0:04:38.674 *********
2026-03-24 03:33:03.447173 | orchestrator | ===============================================================================
2026-03-24 03:33:03.447180 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.86s
2026-03-24 03:33:03.447186 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.18s
2026-03-24 03:33:03.447192 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.14s
2026-03-24 03:33:03.447198 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.33s
2026-03-24 03:33:03.447204 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.22s
2026-03-24 03:33:03.447216 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.18s
2026-03-24 03:33:03.447223 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.91s
2026-03-24 03:33:03.447229 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.72s
2026-03-24 03:33:03.447235 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.46s
2026-03-24 03:33:03.447241 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.35s
2026-03-24 03:33:03.447247 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.02s
2026-03-24 03:33:03.447253 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.69s
2026-03-24 03:33:03.447259 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 5.99s
2026-03-24 03:33:03.447272 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.54s
2026-03-24 03:33:03.447287 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.52s
2026-03-24 03:33:03.725479 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.41s
2026-03-24 03:33:03.725545 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.25s
2026-03-24 03:33:03.725550 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.17s
2026-03-24 03:33:03.725555 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.15s
2026-03-24 03:33:03.725559 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.11s
2026-03-24 03:33:06.199403 | orchestrator | 2026-03-24 03:33:06 | INFO  | Task eef4e077-1887-4656-a305-2ebfb4e30a33 (ceilometer) was prepared for execution.
2026-03-24 03:33:06.199462 | orchestrator | 2026-03-24 03:33:06 | INFO  | It takes a moment until task eef4e077-1887-4656-a305-2ebfb4e30a33 (ceilometer) has been started and output is visible here.
2026-03-24 03:33:29.916624 | orchestrator |
2026-03-24 03:33:29.916819 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 03:33:29.916851 | orchestrator |
2026-03-24 03:33:29.916872 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 03:33:29.916893 | orchestrator | Tuesday 24 March 2026 03:33:10 +0000 (0:00:00.253) 0:00:00.253 *********
2026-03-24 03:33:29.916973 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:33:29.917000 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:33:29.917020 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:33:29.917040 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:33:29.917059 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:33:29.917078 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:33:29.917098 | orchestrator |
2026-03-24 03:33:29.917116 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 03:33:29.917135 | orchestrator | Tuesday 24 March 2026 03:33:10 +0000 (0:00:00.685) 0:00:00.939 *********
2026-03-24 03:33:29.917156 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True)
2026-03-24 03:33:29.917176 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True)
2026-03-24 03:33:29.917196 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True)
2026-03-24 03:33:29.917217 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True)
2026-03-24 03:33:29.917238 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True)
2026-03-24 03:33:29.917260 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True)
2026-03-24 03:33:29.917280 | orchestrator |
2026-03-24 03:33:29.917301 | orchestrator | PLAY [Apply role ceilometer] ***************************************************
2026-03-24 03:33:29.917321 | orchestrator |
2026-03-24 03:33:29.917341 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-03-24 03:33:29.917361 | orchestrator | Tuesday 24 March 2026 03:33:11 +0000 (0:00:00.606) 0:00:01.545 *********
2026-03-24 03:33:29.917384 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 03:33:29.917405 | orchestrator |
2026-03-24 03:33:29.917425 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ********************
2026-03-24 03:33:29.917446 | orchestrator | Tuesday 24 March 2026 03:33:12 +0000 (0:00:01.168) 0:00:02.714 *********
2026-03-24 03:33:29.917467 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:29.917487 | orchestrator |
2026-03-24 03:33:29.917500 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] *******************
2026-03-24 03:33:29.917511 | orchestrator | Tuesday 24 March 2026 03:33:12 +0000 (0:00:00.128) 0:00:02.842 *********
2026-03-24 03:33:29.917522 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:29.917533 | orchestrator |
2026-03-24 03:33:29.917544 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ********************
2026-03-24 03:33:29.917587 | orchestrator | Tuesday 24 March 2026 03:33:13 +0000 (0:00:00.119) 0:00:02.962 *********
2026-03-24 03:33:29.917599 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-24 03:33:29.917610 | orchestrator |
2026-03-24 03:33:29.917621 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] ***********************
2026-03-24 03:33:29.917632 | orchestrator | Tuesday 24 March 2026 03:33:16 +0000 (0:00:03.830) 0:00:06.793 *********
2026-03-24 03:33:29.917643 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-24 03:33:29.917653 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service)
2026-03-24 03:33:29.917664 | orchestrator |
2026-03-24 03:33:29.917675 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] ***********************
2026-03-24 03:33:29.917686 | orchestrator | Tuesday 24 March 2026 03:33:20 +0000 (0:00:04.134) 0:00:10.927 *********
2026-03-24 03:33:29.917696 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-24 03:33:29.917707 | orchestrator |
2026-03-24 03:33:29.917718 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ******************
2026-03-24 03:33:29.917746 | orchestrator | Tuesday 24 March 2026 03:33:24 +0000 (0:00:03.370) 0:00:14.298 *********
2026-03-24 03:33:29.917757 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin)
2026-03-24 03:33:29.917768 | orchestrator |
2026-03-24 03:33:29.917779 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] *******
2026-03-24 03:33:29.917789 | orchestrator | Tuesday 24 March 2026 03:33:28 +0000 (0:00:04.020) 0:00:18.318 *********
2026-03-24 03:33:29.917800 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:29.917810 | orchestrator |
2026-03-24 03:33:29.917821 | orchestrator | TASK [ceilometer : Ensuring config directories exist] **************************
2026-03-24 03:33:29.917832 | orchestrator | Tuesday 24 March 2026 03:33:28 +0000 (0:00:00.126) 0:00:18.444 *********
2026-03-24 03:33:29.917846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:29.917889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:29.917902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:29.917959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:29.917998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:29.918108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:29.918140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:29.918176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:33.998688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:33.998783 | orchestrator |
2026-03-24 03:33:33.998797 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] *****
2026-03-24 03:33:33.998832 | orchestrator | Tuesday 24 March 2026 03:33:29 +0000 (0:00:01.411) 0:00:19.856 *********
2026-03-24 03:33:33.998841 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-24 03:33:33.998851 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-24 03:33:33.998860 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-24 03:33:33.998868 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-24 03:33:33.998877 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-24 03:33:33.998886 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-24 03:33:33.998894 | orchestrator |
2026-03-24 03:33:33.998904 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] ***
2026-03-24 03:33:33.998973 | orchestrator | Tuesday 24 March 2026 03:33:31 +0000 (0:00:01.489) 0:00:21.346 *********
2026-03-24 03:33:33.998984 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:33:33.998994 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:33:33.999003 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:33:33.999012 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:33:33.999020 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:33:33.999029 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:33:33.999038 | orchestrator |
2026-03-24 03:33:33.999051 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] ***
2026-03-24 03:33:33.999066 | orchestrator | Tuesday 24 March 2026 03:33:31 +0000 (0:00:00.543) 0:00:21.889 *********
2026-03-24 03:33:33.999092 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:33.999106 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:33.999147 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:33.999162 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:33.999175 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:33.999187 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:33.999202 | orchestrator |
2026-03-24 03:33:33.999217 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter definitions] ***
2026-03-24 03:33:33.999232 | orchestrator | Tuesday 24 March 2026 03:33:32 +0000 (0:00:00.677) 0:00:22.566 *********
2026-03-24 03:33:33.999247 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:33:33.999261 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:33:33.999275 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:33:33.999291 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:33:33.999306 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:33:33.999356 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:33:33.999368 | orchestrator |
2026-03-24 03:33:33.999379 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] *********
2026-03-24 03:33:33.999389 | orchestrator | Tuesday 24 March 2026 03:33:33 +0000 (0:00:00.526) 0:00:23.093 *********
2026-03-24 03:33:33.999407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:33.999419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:33.999441 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:33.999470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:33.999481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:33.999490 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:33.999499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:33.999508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:33.999522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:33.999533 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:33.999541 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:33.999550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:33.999565 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:33.999582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:37.831450 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:37.831562 | orchestrator |
2026-03-24 03:33:37.831576 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] *************
2026-03-24 03:33:37.831590 | orchestrator | Tuesday 24 March 2026 03:33:33 +0000 (0:00:00.854) 0:00:23.947 *********
2026-03-24 03:33:37.831604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:37.831622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:37.831640 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:37.831680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:37.831694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:37.831725 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:37.831737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:37.831750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:37.831779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:37.831793 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:37.831805 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:37.831817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:37.831829 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:37.831841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:37.831848 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:37.831855 | orchestrator |
2026-03-24 03:33:37.831863 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] ***
2026-03-24 03:33:37.831878 | orchestrator | Tuesday 24 March 2026 03:33:34 +0000 (0:00:00.568) 0:00:24.684 *********
2026-03-24 03:33:37.831885 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-24 03:33:37.831892 | orchestrator |
2026-03-24 03:33:37.831899 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] ***
2026-03-24 03:33:37.831906 | orchestrator | Tuesday 24 March 2026 03:33:35 +0000 (0:00:00.568) 0:00:25.252 *********
2026-03-24 03:33:37.831997 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:33:37.832012 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:33:37.832024 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:33:37.832035 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:33:37.832046 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:33:37.832058 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:33:37.832068 | orchestrator |
2026-03-24 03:33:37.832079 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] *****
2026-03-24 03:33:37.832091 | orchestrator | Tuesday 24 March 2026 03:33:35 +0000 (0:00:00.604) 0:00:25.857 *********
2026-03-24 03:33:37.832103 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:33:37.832114 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:33:37.832127 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:33:37.832140 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:33:37.832149 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:33:37.832155 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:33:37.832162 | orchestrator |
2026-03-24 03:33:37.832168 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] ****
2026-03-24 03:33:37.832175 | orchestrator | Tuesday 24 March 2026 03:33:36 +0000 (0:00:00.844) 0:00:26.702 *********
2026-03-24 03:33:37.832182 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:37.832188 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:37.832195 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:37.832201 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:37.832207 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:37.832214 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:37.832220 | orchestrator |
2026-03-24 03:33:37.832227 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] **********************
2026-03-24 03:33:37.832233 | orchestrator | Tuesday 24 March 2026 03:33:37 +0000 (0:00:00.589) 0:00:27.292 *********
2026-03-24 03:33:37.832240 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:37.832247 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:37.832259 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:37.832270 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:37.832280 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:37.832290 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:37.832301 | orchestrator |
2026-03-24 03:33:42.192277 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************
2026-03-24 03:33:42.192368 | orchestrator | Tuesday 24 March 2026 03:33:37 +0000 (0:00:00.493) 0:00:27.786 *********
2026-03-24 03:33:42.192380 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-24 03:33:42.192388 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-24 03:33:42.192395 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-24 03:33:42.192401 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-24 03:33:42.192408 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-24 03:33:42.192415 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-24 03:33:42.192422 | orchestrator |
2026-03-24 03:33:42.192430 | orchestrator | TASK [ceilometer : Copying over polling.yaml] **********************************
2026-03-24 03:33:42.192437 | orchestrator | Tuesday 24 March 2026 03:33:39 +0000 (0:00:01.237) 0:00:29.024 *********
2026-03-24 03:33:42.192447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:42.192480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:42.192487 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:42.192508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:42.192516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:42.192524 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:42.192532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:42.192556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:42.192565 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:42.192572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:42.192617 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:42.192625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:42.192633 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:42.192645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:42.192653 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:42.192660 | orchestrator |
2026-03-24 03:33:42.192668 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-03-24 03:33:42.192675 | orchestrator | Tuesday 24 March 2026 03:33:39 +0000 (0:00:00.776) 0:00:29.800 *********
2026-03-24 03:33:42.192683 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:42.192691 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:42.192698 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:42.192705 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:42.192713 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:42.192720 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:42.192728 | orchestrator |
2026-03-24 03:33:42.192735 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-03-24 03:33:42.192742 | orchestrator | Tuesday 24 March 2026 03:33:40 +0000 (0:00:00.727) 0:00:30.527 *********
2026-03-24 03:33:42.192750 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-24 03:33:42.192757 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-24 03:33:42.192764 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-24 03:33:42.192771 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-24 03:33:42.192778 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-24 03:33:42.192785 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-24 03:33:42.192793 | orchestrator |
2026-03-24 03:33:42.192800 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-03-24 03:33:42.192808 | orchestrator | Tuesday 24 March 2026 03:33:41 +0000 (0:00:01.213) 0:00:31.741 *********
2026-03-24 03:33:42.192822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.499523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:47.499617 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:47.499626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.499644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:47.499656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.499661 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:47.499666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:47.499672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.499693 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:47.499697 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:47.499713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.499718 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:47.499722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.499727 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:47.499731 | orchestrator |
2026-03-24 03:33:47.499736 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-03-24 03:33:47.499741 | orchestrator | Tuesday 24 March 2026 03:33:42 +0000 (0:00:01.011) 0:00:32.753 *********
2026-03-24 03:33:47.499746 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:47.499750 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:47.499754 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:47.499758 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:47.499762 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:47.499769 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:47.499773 | orchestrator |
2026-03-24 03:33:47.499778 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-03-24 03:33:47.499782 | orchestrator | Tuesday 24 March 2026 03:33:43 +0000 (0:00:00.592) 0:00:33.345 *********
2026-03-24 03:33:47.499786 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:47.499790 | orchestrator |
2026-03-24 03:33:47.499794 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-03-24 03:33:47.499798 | orchestrator | Tuesday 24 March 2026 03:33:43 +0000 (0:00:00.138) 0:00:33.484 *********
2026-03-24 03:33:47.499802 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:47.499807 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:47.499811 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:47.499815 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:47.499819 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:47.499823 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:47.499827 | orchestrator |
2026-03-24 03:33:47.499831 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-03-24 03:33:47.499835 | orchestrator | Tuesday 24 March 2026 03:33:44 +0000 (0:00:00.567) 0:00:34.052 *********
2026-03-24 03:33:47.499845 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 03:33:47.499851 | orchestrator |
2026-03-24 03:33:47.499855 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-03-24 03:33:47.499859 | orchestrator | Tuesday 24 March 2026 03:33:45 +0000 (0:00:01.173) 0:00:35.226 *********
2026-03-24 03:33:47.499863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.499872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.937082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.937173 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.937203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.937214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.937242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:47.937253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:47.937279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:47.937289 | orchestrator |
2026-03-24 03:33:47.937315 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-03-24 03:33:47.937326 | orchestrator | Tuesday 24 March 2026 03:33:47 +0000 (0:00:02.222) 0:00:37.448 *********
2026-03-24 03:33:47.937345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.937360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:47.937378 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:47.937388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.937398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:47.937406 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:47.937416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:47.937433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:49.495064 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:49.495141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:49.495152 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:49.495180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:49.495201 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:49.495205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:49.495209 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:49.495213 | orchestrator |
2026-03-24 03:33:49.495218 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-03-24 03:33:49.495223 | orchestrator | Tuesday 24 March 2026 03:33:48 +0000 (0:00:00.727) 0:00:38.175 *********
2026-03-24 03:33:49.495228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:49.495233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:49.495250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:49.495255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:49.495266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:49.495270 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:33:49.495274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:33:49.495278 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:33:49.495282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:49.495286 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:33:49.495290 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:33:49.495294 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:49.495298 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:33:49.495307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:33:56.802308 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:33:56.802413 | orchestrator |
2026-03-24 03:33:56.802451 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-03-24 03:33:56.802464 | orchestrator | Tuesday 24 March 2026 03:33:49 +0000 (0:00:01.269) 0:00:39.444 *********
2026-03-24 03:33:56.802493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:33:56.802509 | orchestrator |
changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:56.802521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:56.802533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:56.802546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:56.802589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:56.802630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-24 03:33:56.802684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-24 03:33:56.802703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-24 03:33:56.802722 | orchestrator | 2026-03-24 03:33:56.802740 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-03-24 03:33:56.802758 | orchestrator | Tuesday 24 March 2026 03:33:51 +0000 (0:00:02.452) 0:00:41.897 ********* 2026-03-24 03:33:56.802777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 
'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:56.802796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-24 03:33:56.802828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:05.958612 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:05.958746 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:05.958770 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:05.958790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-24 03:34:05.958810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-24 03:34:05.958830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-24 03:34:05.958877 | orchestrator | 2026-03-24 03:34:05.958993 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-03-24 03:34:05.959029 | orchestrator | Tuesday 24 March 2026 03:33:56 +0000 (0:00:04.855) 0:00:46.753 ********* 2026-03-24 03:34:05.959043 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 03:34:05.959055 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-24 03:34:05.959066 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-24 03:34:05.959077 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-24 03:34:05.959088 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-24 03:34:05.959100 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-24 03:34:05.959113 | orchestrator | 2026-03-24 03:34:05.959126 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-03-24 03:34:05.959140 | orchestrator | Tuesday 24 March 2026 03:33:58 +0000 (0:00:01.494) 0:00:48.248 ********* 2026-03-24 03:34:05.959152 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:34:05.959165 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:34:05.959177 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:34:05.959190 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:34:05.959212 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:34:05.959224 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:34:05.959237 | orchestrator | 2026-03-24 03:34:05.959250 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-03-24 03:34:05.959266 | orchestrator | Tuesday 24 March 2026 03:33:58 +0000 (0:00:00.550) 0:00:48.799 ********* 2026-03-24 03:34:05.959285 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:34:05.959303 | orchestrator | 
skipping: [testbed-node-4] 2026-03-24 03:34:05.959322 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:34:05.959340 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:34:05.959356 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:34:05.959374 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:34:05.959393 | orchestrator | 2026-03-24 03:34:05.959411 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-03-24 03:34:05.959430 | orchestrator | Tuesday 24 March 2026 03:34:00 +0000 (0:00:01.678) 0:00:50.477 ********* 2026-03-24 03:34:05.959448 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:34:05.959464 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:34:05.959482 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:34:05.959499 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:34:05.959517 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:34:05.959534 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:34:05.959553 | orchestrator | 2026-03-24 03:34:05.959572 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-03-24 03:34:05.959590 | orchestrator | Tuesday 24 March 2026 03:34:01 +0000 (0:00:01.450) 0:00:51.928 ********* 2026-03-24 03:34:05.959609 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 03:34:05.959627 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-24 03:34:05.959645 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-24 03:34:05.959664 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-24 03:34:05.959682 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-24 03:34:05.959700 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-24 03:34:05.959719 | orchestrator | 2026-03-24 03:34:05.959737 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-03-24 03:34:05.959756 | orchestrator | Tuesday 24 
March 2026 03:34:03 +0000 (0:00:01.422) 0:00:53.350 ********* 2026-03-24 03:34:05.959794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:05.959816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:05.959836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:05.959880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:06.812414 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:06.812503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-24 03:34:06.812536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-24 03:34:06.812546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-24 03:34:06.812553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-24 03:34:06.812560 | orchestrator | 2026-03-24 03:34:06.812568 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-03-24 03:34:06.812576 | orchestrator | Tuesday 24 March 2026 03:34:05 +0000 (0:00:02.552) 0:00:55.903 ********* 2026-03-24 03:34:06.812583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-24 03:34:06.812618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-24 03:34:06.812627 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-24 03:34:06.812640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-24 03:34:06.812647 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:34:06.812654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-24 03:34:06.812661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-24 03:34:06.812667 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:34:06.812674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-24 03:34:06.812680 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:34:06.812687 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:34:06.812702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:34:10.334370 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:34:10.334499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:34:10.334520 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:34:10.334531 | orchestrator |
2026-03-24 03:34:10.334542 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-03-24 03:34:10.334554 | orchestrator | Tuesday 24 March 2026 03:34:06 +0000 (0:00:00.862) 0:00:56.765 *********
2026-03-24 03:34:10.334565 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:34:10.334576 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:34:10.334585 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:34:10.334595 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:34:10.334604 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:34:10.334611 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:34:10.334617 | orchestrator |
2026-03-24 03:34:10.334623 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-03-24 03:34:10.334629 | orchestrator | Tuesday 24 March 2026 03:34:07 +0000 (0:00:00.752) 0:00:57.518 *********
2026-03-24 03:34:10.334637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:34:10.334645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:34:10.334653 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:34:10.334659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:34:10.334679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:34:10.334714 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:34:10.334743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:34:10.334750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:34:10.334756 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:34:10.334762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:34:10.334768 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:34:10.334774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:34:10.334780 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:34:10.334786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:34:10.334806 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:34:10.334822 | orchestrator |
2026-03-24 03:34:10.334837 | orchestrator | TASK [ceilometer : Check ceilometer containers] ********************************
2026-03-24 03:34:10.334847 | orchestrator | Tuesday 24 March 2026 03:34:08 +0000 (0:00:00.855) 0:00:58.373 *********
2026-03-24 03:34:10.334865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:34:39.101794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:34:39.101917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-03-24 03:34:39.101933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:34:39.101941 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:34:39.101955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-03-24 03:34:39.101986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:34:39.102009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:34:39.102053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-03-24 03:34:39.102061 | orchestrator |
2026-03-24 03:34:39.102070 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-03-24 03:34:39.102078 | orchestrator | Tuesday 24 March 2026 03:34:10 +0000 (0:00:01.911) 0:01:00.284 *********
2026-03-24 03:34:39.102084 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:34:39.102092 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:34:39.102099 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:34:39.102105 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:34:39.102110 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:34:39.102117 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:34:39.102123 | orchestrator |
2026-03-24 03:34:39.102130 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] *********************
2026-03-24 03:34:39.102136 | orchestrator | Tuesday 24 March 2026 03:34:10 +0000 (0:00:00.611) 0:01:00.896 *********
2026-03-24 03:34:39.102143 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:34:39.102149 | orchestrator |
2026-03-24 03:34:39.102155 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-24 03:34:39.102161 | orchestrator | Tuesday 24 March 2026 03:34:14 +0000 (0:00:04.025) 0:01:04.921 *********
2026-03-24 03:34:39.102168 | orchestrator |
2026-03-24 03:34:39.102174 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-24 03:34:39.102180 | orchestrator | Tuesday 24 March 2026 03:34:15 +0000 (0:00:00.070) 0:01:04.992 *********
2026-03-24 03:34:39.102186 | orchestrator |
2026-03-24 03:34:39.102199 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-24 03:34:39.102205 | orchestrator | Tuesday 24 March 2026 03:34:15 +0000 (0:00:00.069) 0:01:05.062 *********
2026-03-24 03:34:39.102211 | orchestrator |
2026-03-24 03:34:39.102217 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-24 03:34:39.102224 | orchestrator | Tuesday 24 March 2026 03:34:15 +0000 (0:00:00.235) 0:01:05.297 *********
2026-03-24 03:34:39.102231 | orchestrator |
2026-03-24 03:34:39.102238 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-24 03:34:39.102245 | orchestrator | Tuesday 24 March 2026 03:34:15 +0000 (0:00:00.071) 0:01:05.368 *********
2026-03-24 03:34:39.102251 | orchestrator |
2026-03-24 03:34:39.102258 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-03-24 03:34:39.102264 | orchestrator | Tuesday 24 March 2026 03:34:15 +0000 (0:00:00.066) 0:01:05.435 *********
2026-03-24 03:34:39.102270 | orchestrator |
2026-03-24 03:34:39.102288 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] *******
2026-03-24 03:34:39.102301 | orchestrator | Tuesday 24 March 2026 03:34:15 +0000 (0:00:00.068) 0:01:05.503 *********
2026-03-24 03:34:39.102308 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:34:39.102314 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:34:39.102320 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:34:39.102326 | orchestrator |
2026-03-24 03:34:39.102333 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************
2026-03-24 03:34:39.102340 | orchestrator | Tuesday 24 March 2026 03:34:20 +0000 (0:00:05.057) 0:01:10.561 *********
2026-03-24 03:34:39.102346 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:34:39.102353 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:34:39.102360 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:34:39.102367 | orchestrator |
2026-03-24 03:34:39.102374 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************
2026-03-24 03:34:39.102381 | orchestrator | Tuesday 24 March 2026 03:34:28 +0000 (0:00:07.611) 0:01:18.172 *********
2026-03-24 03:34:39.102387 | orchestrator | changed: [testbed-node-3]
2026-03-24 03:34:39.102393 | orchestrator | changed: [testbed-node-5]
2026-03-24 03:34:39.102400 | orchestrator | changed: [testbed-node-4]
2026-03-24 03:34:39.102406 | orchestrator |
2026-03-24 03:34:39.102413 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 03:34:39.102422 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-24 03:34:39.102431 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-24 03:34:39.102446 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-24 03:34:39.506542 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-24 03:34:39.506616 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-24 03:34:39.506624 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-24 03:34:39.506632 | orchestrator |
2026-03-24 03:34:39.506639 | orchestrator |
2026-03-24 03:34:39.506647 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 03:34:39.506655 | orchestrator | Tuesday 24 March 2026 03:34:39 +0000 (0:00:10.876) 0:01:29.049 *********
2026-03-24 03:34:39.506662 | orchestrator | ===============================================================================
2026-03-24 03:34:39.506668 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 10.88s
2026-03-24 03:34:39.506701 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 7.61s
2026-03-24 03:34:39.506709 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 5.06s
2026-03-24 03:34:39.506716 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.86s
2026-03-24 03:34:39.506723 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 4.13s
2026-03-24 03:34:39.506730 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.03s
2026-03-24 03:34:39.506736 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.02s
2026-03-24 03:34:39.506744 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.83s
2026-03-24 03:34:39.506749 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.37s
2026-03-24 03:34:39.506753 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.55s
2026-03-24 03:34:39.506757 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.45s
2026-03-24 03:34:39.506761 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.22s
2026-03-24 03:34:39.506765 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.91s
2026-03-24 03:34:39.506769 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.68s
2026-03-24 03:34:39.506773 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.50s
2026-03-24 03:34:39.506777 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.49s
2026-03-24 03:34:39.506780 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.45s
2026-03-24 03:34:39.506784 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.42s
2026-03-24 03:34:39.506788 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.41s
2026-03-24 03:34:39.506791 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.27s
2026-03-24 03:34:41.810551 | orchestrator | 2026-03-24 03:34:41 | INFO  | Task 7d75c40f-dc99-4ab1-b156-6c23b7d36fda (aodh) was prepared for execution.
2026-03-24 03:34:41.810670 | orchestrator | 2026-03-24 03:34:41 | INFO  | It takes a moment until task 7d75c40f-dc99-4ab1-b156-6c23b7d36fda (aodh) has been started and output is visible here.
2026-03-24 03:35:14.191996 | orchestrator |
2026-03-24 03:35:14.192097 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 03:35:14.192115 | orchestrator |
2026-03-24 03:35:14.192129 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 03:35:14.192142 | orchestrator | Tuesday 24 March 2026 03:34:45 +0000 (0:00:00.250) 0:00:00.250 *********
2026-03-24 03:35:14.192155 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:35:14.192169 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:35:14.192182 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:35:14.192194 | orchestrator |
2026-03-24 03:35:14.192206 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 03:35:14.192217 | orchestrator | Tuesday 24 March 2026 03:34:46 +0000 (0:00:00.305) 0:00:00.556 *********
2026-03-24 03:35:14.192235 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True)
2026-03-24 03:35:14.192250 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True)
2026-03-24 03:35:14.192262 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True)
2026-03-24 03:35:14.192274 | orchestrator |
2026-03-24 03:35:14.192285 | orchestrator | PLAY [Apply role aodh] *********************************************************
2026-03-24 03:35:14.192297 | orchestrator |
2026-03-24 03:35:14.192309 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-03-24 03:35:14.192321 | orchestrator | Tuesday 24 March 2026 03:34:46 +0000 (0:00:00.393) 0:00:00.950 *********
2026-03-24 03:35:14.192333 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 03:35:14.192347 | orchestrator |
2026-03-24 03:35:14.192387 | orchestrator | TASK [service-ks-register : aodh | Creating services] **************************
2026-03-24 03:35:14.192399 | orchestrator | Tuesday 24 March 2026 03:34:47 +0000 (0:00:00.649) 0:00:01.599 *********
2026-03-24 03:35:14.192409 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming))
2026-03-24 03:35:14.192421 | orchestrator |
2026-03-24 03:35:14.192432 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] *************************
2026-03-24 03:35:14.192444 | orchestrator | Tuesday 24 March 2026 03:34:50 +0000 (0:00:03.510) 0:00:05.110 *********
2026-03-24 03:35:14.192456 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal)
2026-03-24 03:35:14.192469 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public)
2026-03-24 03:35:14.192481 | orchestrator |
2026-03-24 03:35:14.192492 | orchestrator | TASK [service-ks-register : aodh | Creating projects] **************************
2026-03-24 03:35:14.192505 | orchestrator | Tuesday 24 March 2026 03:34:57 +0000 (0:00:06.562) 0:00:11.673 *********
2026-03-24 03:35:14.192517 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-24 03:35:14.192531 | orchestrator |
2026-03-24 03:35:14.192543 | orchestrator | TASK [service-ks-register : aodh | Creating users] *****************************
2026-03-24 03:35:14.192556 | orchestrator | Tuesday 24 March 2026 03:35:00 +0000 (0:00:03.445) 0:00:15.118 *********
2026-03-24 03:35:14.192568 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-24 03:35:14.192581 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service)
2026-03-24 03:35:14.192593 | orchestrator |
2026-03-24 03:35:14.192604 | orchestrator | TASK [service-ks-register : aodh | Creating roles] *****************************
2026-03-24 03:35:14.192615 | orchestrator | Tuesday 24 March 2026 03:35:04 +0000 (0:00:04.187) 0:00:19.306 *********
2026-03-24 03:35:14.192626 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-24 03:35:14.192637 | orchestrator |
2026-03-24 03:35:14.192648 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************
2026-03-24 03:35:14.192660 | orchestrator | Tuesday 24 March 2026 03:35:08 +0000 (0:00:03.398) 0:00:22.705 *********
2026-03-24 03:35:14.192673 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin)
2026-03-24 03:35:14.192684 | orchestrator |
2026-03-24 03:35:14.192696 | orchestrator | TASK [aodh : Ensuring config directories exist] ********************************
2026-03-24 03:35:14.192710 | orchestrator | Tuesday 24 March 2026 03:35:12 +0000 (0:00:03.950) 0:00:26.655 *********
2026-03-24 03:35:14.192729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:14.192769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:14.192795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:14.192811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:14.192826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:14.192840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:14.192853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:14.192874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:15.231645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:15.231739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:15.231750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:15.231759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:15.231767 | orchestrator |
2026-03-24 03:35:15.231774 | orchestrator | TASK [aodh : Check if policies shall be overwritten] ***************************
2026-03-24 03:35:15.231780 | orchestrator | Tuesday 24 March 2026 03:35:14 +0000 (0:00:01.940) 0:00:28.595 *********
2026-03-24 03:35:15.231785 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:35:15.231790 | orchestrator |
2026-03-24 03:35:15.231794 | orchestrator | TASK [aodh : Set aodh policy file] *********************************************
2026-03-24 03:35:15.231798 | orchestrator | Tuesday 24 March 2026 03:35:14 +0000 (0:00:00.124) 0:00:28.720 *********
2026-03-24 03:35:15.231802 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:35:15.231806 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:35:15.231810 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:35:15.231814 | orchestrator |
2026-03-24 03:35:15.231818 | orchestrator | TASK [aodh : Copying over existing policy file] ********************************
2026-03-24 03:35:15.231822 | orchestrator | Tuesday 24 March 2026 03:35:14 +0000 (0:00:00.372) 0:00:29.092 *********
2026-03-24 03:35:15.231827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:15.231862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:15.231867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:15.231871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:15.231910 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:35:15.231915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042',
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 03:35:15.231921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 03:35:15.231928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:35:15.231948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 03:35:19.731645 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:35:19.731763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 03:35:19.731784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 03:35:19.731801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:35:19.731815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 03:35:19.731829 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:35:19.731843 | orchestrator | 2026-03-24 03:35:19.731857 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-24 03:35:19.731930 | orchestrator | Tuesday 24 March 2026 03:35:15 +0000 (0:00:00.545) 0:00:29.638 ********* 2026-03-24 03:35:19.731972 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:35:19.731987 | orchestrator | 2026-03-24 03:35:19.732000 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-03-24 03:35:19.732013 | orchestrator | Tuesday 24 March 2026 03:35:15 +0000 (0:00:00.585) 0:00:30.224 ********* 2026-03-24 03:35:19.732027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-24 03:35:19.732063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-24 03:35:19.732078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-24 03:35:19.732092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-24 03:35:19.732105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-24 03:35:19.732127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-24 03:35:19.732139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:35:19.732160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:35:20.269582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-24 03:35:20.269666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-24 03:35:20.269676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-24 03:35:20.269684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-24 03:35:20.269713 | orchestrator | 2026-03-24 03:35:20.269723 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-03-24 03:35:20.269732 | orchestrator | Tuesday 24 March 2026 03:35:19 +0000 (0:00:03.911) 0:00:34.135 ********* 2026-03-24 03:35:20.269741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 03:35:20.269750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 03:35:20.269773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:35:20.269781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 03:35:20.269789 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:35:20.269798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 03:35:20.269811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 03:35:20.269818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:35:20.269826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 03:35:20.269833 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:35:20.269847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 03:35:21.117355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 03:35:21.117457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:35:21.117492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 03:35:21.117504 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:35:21.117516 | orchestrator | 2026-03-24 03:35:21.117527 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-03-24 03:35:21.117539 | orchestrator | Tuesday 24 March 2026 03:35:20 +0000 (0:00:00.540) 0:00:34.676 ********* 2026-03-24 03:35:21.117551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 03:35:21.117564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 03:35:21.117575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 03:35:21.117602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 03:35:21.117613 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:35:21.117632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-24 03:35:21.117643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 03:35:21.117654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:21.117665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:21.117675 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:35:21.117692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:25.048225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:25.048384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:25.048403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:25.048417 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:35:25.048431 | orchestrator |
2026-03-24 03:35:25.048443 | orchestrator | TASK [aodh : Copying over config.json files for services] **********************
2026-03-24 03:35:25.048456 | orchestrator | Tuesday 24 March 2026 03:35:21 +0000 (0:00:00.848) 0:00:35.524 *********
2026-03-24 03:35:25.048468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:25.048481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:25.048513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:25.048534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:25.048546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:25.048557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:25.048569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:25.048581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:25.048592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:25.048622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:32.973962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:32.974093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:32.974104 | orchestrator |
2026-03-24 03:35:32.974112 | orchestrator | TASK [aodh : Copying over aodh.conf] *******************************************
2026-03-24 03:35:32.974118 | orchestrator | Tuesday 24 March 2026 03:35:25 +0000 (0:00:03.926) 0:00:39.450 *********
2026-03-24 03:35:32.974125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:32.974133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:32.974139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:32.974173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:32.974180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:32.974190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:32.974203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:32.974216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:32.974225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:32.974242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:32.974257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:37.999764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:37.999848 | orchestrator |
2026-03-24 03:35:37.999859 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************
2026-03-24 03:35:37.999908 | orchestrator | Tuesday 24 March 2026 03:35:32 +0000 (0:00:07.928) 0:00:47.378 *********
2026-03-24 03:35:37.999917 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:35:37.999924 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:35:37.999931 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:35:37.999937 | orchestrator |
2026-03-24 03:35:37.999943 | orchestrator | TASK [aodh : Check aodh containers] ********************************************
2026-03-24 03:35:37.999950 | orchestrator | Tuesday 24 March 2026 03:35:34 +0000 (0:00:01.781) 0:00:49.160 *********
2026-03-24 03:35:37.999957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:37.999966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:37.999994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-24 03:35:38.000013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:38.000021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:38.000028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-24 03:35:38.000035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:38.000041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:38.000053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-24 03:35:38.000059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:35:38.000071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:36:27.121373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-24 03:36:27.121492 | orchestrator |
2026-03-24 03:36:27.121510 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-03-24 03:36:27.121524 | orchestrator | Tuesday 24 March 2026 03:35:37 +0000 (0:00:00.291) 0:00:52.397 *********
2026-03-24 03:36:27.121537 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:36:27.121557 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:36:27.121577 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:36:27.121597 | orchestrator |
2026-03-24 03:36:27.121616 | orchestrator | TASK [aodh : Creating aodh database] *******************************************
2026-03-24 03:36:27.121636 | orchestrator | Tuesday 24 March 2026 03:35:38 +0000 (0:00:00.291) 0:00:52.689 *********
2026-03-24 03:36:27.121654 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:36:27.121674 | orchestrator |
2026-03-24 03:36:27.121693 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] **************
2026-03-24 03:36:27.121713 | orchestrator | Tuesday 24 March 2026 03:35:40 +0000 (0:00:02.142) 0:00:54.831 *********
2026-03-24 03:36:27.121733 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:36:27.121786 | orchestrator |
2026-03-24 03:36:27.121798 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-03-24 03:36:27.121810 | orchestrator | Tuesday 24 March 2026 03:35:42 +0000 (0:00:02.350) 0:00:57.181 *********
2026-03-24 03:36:27.121820 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:36:27.121831 | orchestrator |
2026-03-24 03:36:27.121842 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-03-24 03:36:27.121929 | orchestrator | Tuesday 24 March 2026 03:35:55 +0000 (0:00:12.963) 0:01:10.144 *********
2026-03-24 03:36:27.121945 | orchestrator |
2026-03-24 03:36:27.121959 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-03-24 03:36:27.121971 | orchestrator | Tuesday 24 March 2026 03:35:55 +0000 (0:00:00.068) 0:01:10.212 *********
2026-03-24 03:36:27.121983 | orchestrator |
2026-03-24 03:36:27.121995 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-03-24 03:36:27.122008 | orchestrator | Tuesday 24 March 2026 03:35:55 +0000 (0:00:00.068) 0:01:10.280 *********
2026-03-24 03:36:27.122080 | orchestrator |
2026-03-24 03:36:27.122092 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-03-24 03:36:27.122103 | orchestrator | Tuesday 24 March 2026 03:35:56 +0000 (0:00:00.230) 0:01:10.511 *********
2026-03-24 03:36:27.122115 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:36:27.122126 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:36:27.122137 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:36:27.122148 | orchestrator |
2026-03-24 03:36:27.122159 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-03-24 03:36:27.122170 | orchestrator | Tuesday 24 March 2026 03:36:01 +0000 (0:00:05.401) 0:01:15.912 *********
2026-03-24 03:36:27.122181 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:36:27.122192 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:36:27.122203 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:36:27.122214 | orchestrator |
2026-03-24 03:36:27.122225 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-03-24 03:36:27.122236 | orchestrator | Tuesday 24 March 2026 03:36:11 +0000 (0:00:09.846) 0:01:25.758 *********
2026-03-24 03:36:27.122247 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:36:27.122258 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:36:27.122268 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:36:27.122279 | orchestrator |
2026-03-24 03:36:27.122290 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-03-24 03:36:27.122301 | orchestrator | Tuesday 24 March 2026 03:36:21 +0000 (0:00:10.212) 0:01:35.971 *********
2026-03-24 03:36:27.122312 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:36:27.122322 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:36:27.122333 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:36:27.122344 | orchestrator |
2026-03-24 03:36:27.122355 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 03:36:27.122367 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 03:36:27.122379 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 03:36:27.122390 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 03:36:27.122401 | orchestrator |
2026-03-24 03:36:27.122413 | orchestrator |
2026-03-24 03:36:27.122423 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 03:36:27.122435 | orchestrator | Tuesday 24 March 2026 03:36:26 +0000 (0:00:05.262) 0:01:41.234 *********
2026-03-24 03:36:27.122445 | orchestrator | ===============================================================================
2026-03-24 03:36:27.122456 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.96s
2026-03-24 03:36:27.122467 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.21s 2026-03-24 03:36:27.122510 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 9.85s 2026-03-24 03:36:27.122533 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 7.93s 2026-03-24 03:36:27.122558 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.56s 2026-03-24 03:36:27.122574 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.40s 2026-03-24 03:36:27.122590 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.26s 2026-03-24 03:36:27.122607 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.19s 2026-03-24 03:36:27.122624 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.95s 2026-03-24 03:36:27.122641 | orchestrator | aodh : Copying over config.json files for services ---------------------- 3.93s 2026-03-24 03:36:27.122659 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 3.91s 2026-03-24 03:36:27.122676 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.51s 2026-03-24 03:36:27.122695 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.45s 2026-03-24 03:36:27.122713 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.40s 2026-03-24 03:36:27.122730 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.24s 2026-03-24 03:36:27.122748 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.35s 2026-03-24 03:36:27.122761 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.14s 2026-03-24 
03:36:27.122770 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 1.94s 2026-03-24 03:36:27.122780 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.78s 2026-03-24 03:36:27.122790 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 0.85s 2026-03-24 03:36:29.297284 | orchestrator | 2026-03-24 03:36:29 | INFO  | Task ad0c489a-2e3f-41b4-a0da-59c639b2fdb3 (kolla-ceph-rgw) was prepared for execution. 2026-03-24 03:36:29.297354 | orchestrator | 2026-03-24 03:36:29 | INFO  | It takes a moment until task ad0c489a-2e3f-41b4-a0da-59c639b2fdb3 (kolla-ceph-rgw) has been started and output is visible here. 2026-03-24 03:37:00.611598 | orchestrator | 2026-03-24 03:37:00.611688 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:37:00.611697 | orchestrator | 2026-03-24 03:37:00.611720 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:37:00.611727 | orchestrator | Tuesday 24 March 2026 03:36:32 +0000 (0:00:00.199) 0:00:00.199 ********* 2026-03-24 03:37:00.611734 | orchestrator | ok: [testbed-manager] 2026-03-24 03:37:00.611741 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:37:00.611747 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:37:00.611753 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:37:00.611759 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:37:00.611765 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:37:00.611783 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:37:00.611789 | orchestrator | 2026-03-24 03:37:00.611795 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:37:00.611801 | orchestrator | Tuesday 24 March 2026 03:36:33 +0000 (0:00:00.598) 0:00:00.798 ********* 2026-03-24 03:37:00.611808 | orchestrator | ok: [testbed-manager] => 
(item=enable_ceph_rgw_True) 2026-03-24 03:37:00.611814 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-24 03:37:00.611820 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-24 03:37:00.611826 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-24 03:37:00.611832 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-24 03:37:00.611837 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-24 03:37:00.611921 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-24 03:37:00.611944 | orchestrator | 2026-03-24 03:37:00.611951 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-24 03:37:00.611957 | orchestrator | 2026-03-24 03:37:00.611963 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-24 03:37:00.611969 | orchestrator | Tuesday 24 March 2026 03:36:34 +0000 (0:00:00.530) 0:00:01.329 ********* 2026-03-24 03:37:00.611975 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 03:37:00.611982 | orchestrator | 2026-03-24 03:37:00.611988 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-24 03:37:00.611994 | orchestrator | Tuesday 24 March 2026 03:36:35 +0000 (0:00:01.049) 0:00:02.379 ********* 2026-03-24 03:37:00.612000 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-24 03:37:00.612006 | orchestrator | 2026-03-24 03:37:00.612012 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-24 03:37:00.612018 | orchestrator | Tuesday 24 March 2026 03:36:38 +0000 (0:00:03.362) 0:00:05.741 ********* 2026-03-24 03:37:00.612025 | orchestrator | changed: 
[testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-24 03:37:00.612032 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-24 03:37:00.612038 | orchestrator | 2026-03-24 03:37:00.612044 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-24 03:37:00.612050 | orchestrator | Tuesday 24 March 2026 03:36:43 +0000 (0:00:05.432) 0:00:11.174 ********* 2026-03-24 03:37:00.612056 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-24 03:37:00.612062 | orchestrator | 2026-03-24 03:37:00.612067 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-24 03:37:00.612073 | orchestrator | Tuesday 24 March 2026 03:36:46 +0000 (0:00:02.882) 0:00:14.057 ********* 2026-03-24 03:37:00.612079 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:37:00.612085 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-24 03:37:00.612091 | orchestrator | 2026-03-24 03:37:00.612096 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-03-24 03:37:00.612122 | orchestrator | Tuesday 24 March 2026 03:36:50 +0000 (0:00:03.486) 0:00:17.543 ********* 2026-03-24 03:37:00.612129 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-24 03:37:00.612135 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-24 03:37:00.612148 | orchestrator | 2026-03-24 03:37:00.612154 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-24 03:37:00.612160 | orchestrator | Tuesday 24 March 2026 03:36:55 +0000 (0:00:05.638) 0:00:23.182 ********* 2026-03-24 03:37:00.612166 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 
2026-03-24 03:37:00.612172 | orchestrator | 2026-03-24 03:37:00.612178 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:37:00.612183 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:00.612190 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:00.612196 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:00.612202 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:00.612208 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:00.612232 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:00.612239 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:00.612245 | orchestrator | 2026-03-24 03:37:00.612251 | orchestrator | 2026-03-24 03:37:00.612256 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:37:00.612262 | orchestrator | Tuesday 24 March 2026 03:37:00 +0000 (0:00:04.317) 0:00:27.499 ********* 2026-03-24 03:37:00.612268 | orchestrator | =============================================================================== 2026-03-24 03:37:00.612279 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.64s 2026-03-24 03:37:00.612285 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.43s 2026-03-24 03:37:00.612290 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.32s 2026-03-24 03:37:00.612296 | orchestrator | service-ks-register : ceph-rgw | Creating users 
------------------------- 3.49s 2026-03-24 03:37:00.612302 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.36s 2026-03-24 03:37:00.612308 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.88s 2026-03-24 03:37:00.612313 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.05s 2026-03-24 03:37:00.612319 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s 2026-03-24 03:37:00.612325 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-03-24 03:37:02.867512 | orchestrator | 2026-03-24 03:37:02 | INFO  | Task c51d4153-9e45-40e9-a678-24b9c4d4d368 (gnocchi) was prepared for execution. 2026-03-24 03:37:02.867584 | orchestrator | 2026-03-24 03:37:02 | INFO  | It takes a moment until task c51d4153-9e45-40e9-a678-24b9c4d4d368 (gnocchi) has been started and output is visible here. 2026-03-24 03:37:07.674722 | orchestrator | 2026-03-24 03:37:07.674818 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:37:07.674828 | orchestrator | 2026-03-24 03:37:07.674836 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:37:07.674926 | orchestrator | Tuesday 24 March 2026 03:37:06 +0000 (0:00:00.245) 0:00:00.245 ********* 2026-03-24 03:37:07.674935 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:37:07.674941 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:37:07.674945 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:37:07.674949 | orchestrator | 2026-03-24 03:37:07.674954 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:37:07.674959 | orchestrator | Tuesday 24 March 2026 03:37:07 +0000 (0:00:00.303) 0:00:00.549 ********* 2026-03-24 03:37:07.674963 | orchestrator | ok: 
[testbed-node-0] => (item=enable_gnocchi_False) 2026-03-24 03:37:07.674968 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-03-24 03:37:07.674973 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-03-24 03:37:07.674978 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-03-24 03:37:07.674982 | orchestrator | 2026-03-24 03:37:07.674986 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-03-24 03:37:07.674990 | orchestrator | skipping: no hosts matched 2026-03-24 03:37:07.674995 | orchestrator | 2026-03-24 03:37:07.674999 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:37:07.675004 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:07.675010 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:07.675014 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:37:07.675038 | orchestrator | 2026-03-24 03:37:07.675043 | orchestrator | 2026-03-24 03:37:07.675047 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:37:07.675051 | orchestrator | Tuesday 24 March 2026 03:37:07 +0000 (0:00:00.317) 0:00:00.866 ********* 2026-03-24 03:37:07.675055 | orchestrator | =============================================================================== 2026-03-24 03:37:07.675060 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s 2026-03-24 03:37:07.675064 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-03-24 03:37:09.864293 | orchestrator | 2026-03-24 03:37:09 | INFO  | Task fa4a655e-35eb-4f7a-bc2a-899d17567182 (manila) was 
prepared for execution. 2026-03-24 03:37:09.864371 | orchestrator | 2026-03-24 03:37:09 | INFO  | It takes a moment until task fa4a655e-35eb-4f7a-bc2a-899d17567182 (manila) has been started and output is visible here. 2026-03-24 03:37:51.536984 | orchestrator | 2026-03-24 03:37:51.537099 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:37:51.537110 | orchestrator | 2026-03-24 03:37:51.537117 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:37:51.537123 | orchestrator | Tuesday 24 March 2026 03:37:13 +0000 (0:00:00.189) 0:00:00.189 ********* 2026-03-24 03:37:51.537129 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:37:51.537136 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:37:51.537141 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:37:51.537146 | orchestrator | 2026-03-24 03:37:51.537152 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:37:51.537157 | orchestrator | Tuesday 24 March 2026 03:37:13 +0000 (0:00:00.232) 0:00:00.422 ********* 2026-03-24 03:37:51.537163 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-03-24 03:37:51.537169 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-03-24 03:37:51.537174 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-03-24 03:37:51.537179 | orchestrator | 2026-03-24 03:37:51.537185 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-03-24 03:37:51.537190 | orchestrator | 2026-03-24 03:37:51.537195 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-24 03:37:51.537200 | orchestrator | Tuesday 24 March 2026 03:37:14 +0000 (0:00:00.323) 0:00:00.745 ********* 2026-03-24 03:37:51.537218 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:37:51.537224 | orchestrator | 2026-03-24 03:37:51.537229 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-24 03:37:51.537235 | orchestrator | Tuesday 24 March 2026 03:37:14 +0000 (0:00:00.481) 0:00:01.227 ********* 2026-03-24 03:37:51.537240 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:37:51.537246 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:37:51.537251 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:37:51.537256 | orchestrator | 2026-03-24 03:37:51.537261 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-03-24 03:37:51.537266 | orchestrator | Tuesday 24 March 2026 03:37:14 +0000 (0:00:00.336) 0:00:01.564 ********* 2026-03-24 03:37:51.537271 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-03-24 03:37:51.537277 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-03-24 03:37:51.537282 | orchestrator | 2026-03-24 03:37:51.537288 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-03-24 03:37:51.537293 | orchestrator | Tuesday 24 March 2026 03:37:21 +0000 (0:00:06.624) 0:00:08.188 ********* 2026-03-24 03:37:51.537298 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-03-24 03:37:51.537304 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-03-24 03:37:51.537342 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-03-24 03:37:51.537348 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-03-24 03:37:51.537353 | orchestrator | 2026-03-24 03:37:51.537358 | orchestrator | 
TASK [service-ks-register : manila | Creating projects] ************************ 2026-03-24 03:37:51.537363 | orchestrator | Tuesday 24 March 2026 03:37:34 +0000 (0:00:13.221) 0:00:21.409 ********* 2026-03-24 03:37:51.537368 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-24 03:37:51.537373 | orchestrator | 2026-03-24 03:37:51.537378 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-03-24 03:37:51.537383 | orchestrator | Tuesday 24 March 2026 03:37:38 +0000 (0:00:03.367) 0:00:24.777 ********* 2026-03-24 03:37:51.537388 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:37:51.537393 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-03-24 03:37:51.537398 | orchestrator | 2026-03-24 03:37:51.537403 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-03-24 03:37:51.537411 | orchestrator | Tuesday 24 March 2026 03:37:42 +0000 (0:00:03.998) 0:00:28.775 ********* 2026-03-24 03:37:51.537420 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-24 03:37:51.537429 | orchestrator | 2026-03-24 03:37:51.537437 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-03-24 03:37:51.537445 | orchestrator | Tuesday 24 March 2026 03:37:45 +0000 (0:00:03.361) 0:00:32.137 ********* 2026-03-24 03:37:51.537453 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-03-24 03:37:51.537461 | orchestrator | 2026-03-24 03:37:51.537470 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-03-24 03:37:51.537479 | orchestrator | Tuesday 24 March 2026 03:37:49 +0000 (0:00:03.791) 0:00:35.928 ********* 2026-03-24 03:37:51.537508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:37:51.537520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:37:51.537536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:37:51.537554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:37:51.537564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:37:51.537574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:37:51.537590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:01.923710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:01.923879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:01.923921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:01.923929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:01.923936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:01.923943 | orchestrator | 2026-03-24 03:38:01.923951 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-24 03:38:01.923959 | orchestrator | Tuesday 24 March 2026 03:37:51 +0000 (0:00:02.307) 0:00:38.236 ********* 2026-03-24 03:38:01.923969 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:38:01.923976 | orchestrator | 2026-03-24 03:38:01.923982 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-03-24 03:38:01.923988 | orchestrator | Tuesday 24 March 2026 03:37:52 +0000 (0:00:00.513) 0:00:38.749 ********* 2026-03-24 03:38:01.923994 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:38:01.924001 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:38:01.924006 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:38:01.924012 | orchestrator | 2026-03-24 03:38:01.924019 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-03-24 03:38:01.924024 | orchestrator | Tuesday 24 March 2026 03:37:53 +0000 (0:00:01.029) 0:00:39.779 ********* 2026-03-24 03:38:01.924031 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-24 03:38:01.924054 | orchestrator | 
skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-24 03:38:01.924061 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-24 03:38:01.924074 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-24 03:38:01.924080 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-24 03:38:01.924092 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-24 03:38:01.924096 | orchestrator | 2026-03-24 03:38:01.924100 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-03-24 03:38:01.924103 | orchestrator | Tuesday 24 March 2026 03:37:55 +0000 (0:00:01.880) 0:00:41.659 ********* 2026-03-24 03:38:01.924107 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-24 03:38:01.924111 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-24 03:38:01.924115 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': 
['CEPHFS']}) 2026-03-24 03:38:01.924118 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-24 03:38:01.924122 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-03-24 03:38:01.924126 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-03-24 03:38:01.924129 | orchestrator | 2026-03-24 03:38:01.924133 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-03-24 03:38:01.924137 | orchestrator | Tuesday 24 March 2026 03:37:56 +0000 (0:00:01.280) 0:00:42.940 ********* 2026-03-24 03:38:01.924142 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-03-24 03:38:01.924146 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-03-24 03:38:01.924149 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-03-24 03:38:01.924153 | orchestrator | 2026-03-24 03:38:01.924157 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-03-24 03:38:01.924161 | orchestrator | Tuesday 24 March 2026 03:37:56 +0000 (0:00:00.654) 0:00:43.594 ********* 2026-03-24 03:38:01.924164 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:38:01.924168 | orchestrator | 2026-03-24 03:38:01.924172 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-03-24 03:38:01.924176 | orchestrator | Tuesday 24 March 2026 03:37:57 +0000 (0:00:00.123) 0:00:43.718 ********* 2026-03-24 03:38:01.924179 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:38:01.924183 | orchestrator | skipping: 
[testbed-node-1] 2026-03-24 03:38:01.924187 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:38:01.924190 | orchestrator | 2026-03-24 03:38:01.924194 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-03-24 03:38:01.924198 | orchestrator | Tuesday 24 March 2026 03:37:57 +0000 (0:00:00.457) 0:00:44.176 ********* 2026-03-24 03:38:01.924202 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:38:01.924206 | orchestrator | 2026-03-24 03:38:01.924210 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-03-24 03:38:01.924213 | orchestrator | Tuesday 24 March 2026 03:37:58 +0000 (0:00:00.532) 0:00:44.708 ********* 2026-03-24 03:38:01.924225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:38:02.799956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:38:02.800062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:38:02.800079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:02.800092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:02.800104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:02.800151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:02.800166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:02.800173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:02.800180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:02.800187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:02.800193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:02.800205 | orchestrator | 2026-03-24 03:38:02.800213 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-03-24 03:38:02.800220 | orchestrator | Tuesday 24 March 2026 03:38:02 +0000 (0:00:03.922) 0:00:48.630 ********* 2026-03-24 03:38:02.800233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 03:38:03.423222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:38:03.423304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:03.423316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 03:38:03.423325 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:38:03.423334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 03:38:03.423361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:38:03.423369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:03.423393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 03:38:03.423401 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:38:03.423408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 03:38:03.423415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:38:03.423422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:03.423434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 03:38:03.423441 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:38:03.423448 | orchestrator | 2026-03-24 03:38:03.423456 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-03-24 03:38:03.423464 | orchestrator | Tuesday 24 March 2026 03:38:02 +0000 (0:00:00.881) 0:00:49.512 ********* 2026-03-24 03:38:03.423480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 03:38:07.895108 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:38:07.895185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:07.895193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 03:38:07.895214 | 
orchestrator | skipping: [testbed-node-0] 2026-03-24 03:38:07.895221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 03:38:07.895227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:38:07.895231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:07.895256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 03:38:07.895261 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:38:07.895265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 03:38:07.895274 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:38:07.895278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:07.895282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 03:38:07.895286 | orchestrator | skipping: [testbed-node-2] 2026-03-24 
03:38:07.895290 | orchestrator | 2026-03-24 03:38:07.895295 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-03-24 03:38:07.895301 | orchestrator | Tuesday 24 March 2026 03:38:03 +0000 (0:00:00.840) 0:00:50.352 ********* 2026-03-24 03:38:07.895312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:38:14.218289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:38:14.218394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:38:14.218405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:14.218413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:14.218420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:14.218449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:14.218458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:14.218470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:14.218477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:14.218484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:14.218490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:14.218496 | orchestrator | 2026-03-24 03:38:14.218504 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-03-24 03:38:14.218512 | orchestrator | Tuesday 24 March 2026 03:38:08 +0000 (0:00:04.432) 0:00:54.785 ********* 2026-03-24 03:38:14.218528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:38:18.137100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:38:18.137200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:38:18.137209 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:18.137216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:18.137233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:18.137250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:18.137260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:18.137297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:18.137304 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:18.137309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:18.137314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:38:18.137319 | orchestrator | 2026-03-24 03:38:18.137325 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-03-24 03:38:18.137332 | orchestrator | Tuesday 24 March 2026 03:38:14 +0000 (0:00:06.134) 0:01:00.919 ********* 2026-03-24 03:38:18.137338 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-03-24 03:38:18.137346 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-03-24 03:38:18.137351 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-03-24 03:38:18.137356 | orchestrator | 2026-03-24 03:38:18.137361 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-03-24 03:38:18.137369 | orchestrator | Tuesday 24 March 2026 03:38:17 +0000 (0:00:03.360) 0:01:04.279 ********* 2026-03-24 03:38:18.137380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 03:38:21.251167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:38:21.251261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:21.251288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 03:38:21.251306 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:38:21.251317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 03:38:21.251340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:38:21.251366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:21.251390 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 03:38:21.251399 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:38:21.251407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-24 03:38:21.251416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 03:38:21.251424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 03:38:21.251446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 03:38:21.251455 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:38:21.251463 | orchestrator | 2026-03-24 03:38:21.251473 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-03-24 03:38:21.251482 | orchestrator | Tuesday 24 March 2026 03:38:18 +0000 (0:00:00.561) 0:01:04.841 ********* 2026-03-24 03:38:21.251498 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:39:01.175982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:39:01.176128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-24 03:39:01.176141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:39:01.176202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:39:01.176222 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-24 03:39:01.176257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:39:01.176272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:39:01.176283 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-24 03:39:01.176296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:39:01.176326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-24 03:39:01.176339 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-24 03:39:01.176352 | orchestrator |
2026-03-24 03:39:01.176366 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-03-24 03:39:01.176380 | orchestrator | Tuesday 24 March 2026 03:38:21 +0000 (0:00:03.115) 0:01:07.956 *********
2026-03-24 03:39:01.176392 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:39:01.176406 | orchestrator |
2026-03-24 03:39:01.176419 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-03-24 03:39:01.176432 | orchestrator | Tuesday 24 March 2026 03:38:23 +0000 (0:00:02.220) 0:01:10.177 *********
2026-03-24 03:39:01.176443 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:39:01.176455 | orchestrator |
2026-03-24 03:39:01.176468 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-03-24 03:39:01.176483 | orchestrator | Tuesday 24 March 2026 03:38:25 +0000 (0:00:02.256) 0:01:12.433 *********
2026-03-24 03:39:01.176495 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:39:01.176508 | orchestrator |
2026-03-24 03:39:01.176520 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-24 03:39:01.176531 | orchestrator | Tuesday 24 March 2026 03:39:00 +0000 (0:00:35.107) 0:01:47.541 *********
2026-03-24 03:39:01.176551 | orchestrator |
2026-03-24 03:39:01.176575 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-24 03:39:48.367279 | orchestrator | Tuesday 24 March 2026 03:39:01 +0000 (0:00:00.084) 0:01:47.626 *********
2026-03-24 03:39:48.367375 | orchestrator |
2026-03-24 03:39:48.367385 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-24 03:39:48.367392 | orchestrator | Tuesday 24 March 2026 03:39:01 +0000 (0:00:00.069) 0:01:47.695 *********
2026-03-24 03:39:48.367398 | orchestrator |
2026-03-24 03:39:48.367404 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-03-24 03:39:48.367411 | orchestrator | Tuesday 24 March 2026 03:39:01 +0000 (0:00:00.082) 0:01:47.777 *********
2026-03-24 03:39:48.367417 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:39:48.367424 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:39:48.367430 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:39:48.367436 | orchestrator |
2026-03-24 03:39:48.367442 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-03-24 03:39:48.367447 | orchestrator | Tuesday 24 March 2026 03:39:15 +0000 (0:00:14.531) 0:02:02.309 *********
2026-03-24 03:39:48.367453 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:39:48.367459 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:39:48.367465 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:39:48.367471 | orchestrator |
2026-03-24 03:39:48.367476 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-03-24 03:39:48.367504 | orchestrator | Tuesday 24 March 2026 03:39:26 +0000 (0:00:10.350) 0:02:12.659 *********
2026-03-24 03:39:48.367510 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:39:48.367516 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:39:48.367532 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:39:48.367538 | orchestrator |
2026-03-24 03:39:48.367544 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-03-24 03:39:48.367549 | orchestrator | Tuesday 24 March 2026 03:39:31 +0000 (0:00:05.004) 0:02:17.664 *********
2026-03-24 03:39:48.367555 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:39:48.367568 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:39:48.367574 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:39:48.367579 | orchestrator |
2026-03-24 03:39:48.367585 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 03:39:48.367592 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 03:39:48.367599 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 03:39:48.367605 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-24 03:39:48.367611 | orchestrator |
2026-03-24 03:39:48.367617 | orchestrator |
2026-03-24 03:39:48.367623 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 03:39:48.367628 | orchestrator | Tuesday 24 March 2026 03:39:47 +0000 (0:00:16.941) 0:02:34.606 *********
2026-03-24 03:39:48.367634 | orchestrator | ===============================================================================
2026-03-24 03:39:48.367640 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 35.11s
2026-03-24 03:39:48.367646 | orchestrator | manila : Restart manila-share container -------------------------------- 16.94s
2026-03-24 03:39:48.367651 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.53s
2026-03-24 03:39:48.367657 | orchestrator | service-ks-register :
manila | Creating endpoints ---------------------- 13.22s 2026-03-24 03:39:48.367663 | orchestrator | manila : Restart manila-data container --------------------------------- 10.35s 2026-03-24 03:39:48.367679 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.62s 2026-03-24 03:39:48.367685 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.13s 2026-03-24 03:39:48.367691 | orchestrator | manila : Restart manila-scheduler container ----------------------------- 5.00s 2026-03-24 03:39:48.367697 | orchestrator | manila : Copying over config.json files for services -------------------- 4.43s 2026-03-24 03:39:48.367702 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.00s 2026-03-24 03:39:48.367708 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.92s 2026-03-24 03:39:48.367714 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.79s 2026-03-24 03:39:48.367720 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.37s 2026-03-24 03:39:48.367725 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.36s 2026-03-24 03:39:48.367731 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.36s 2026-03-24 03:39:48.367737 | orchestrator | manila : Check manila containers ---------------------------------------- 3.12s 2026-03-24 03:39:48.367743 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.31s 2026-03-24 03:39:48.367748 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.26s 2026-03-24 03:39:48.367754 | orchestrator | manila : Creating Manila database --------------------------------------- 2.22s 2026-03-24 03:39:48.367760 | orchestrator | manila : Copy over multiple ceph 
configs for Manila --------------------- 1.88s 2026-03-24 03:39:48.708864 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-03-24 03:40:00.802845 | orchestrator | 2026-03-24 03:40:00 | INFO  | Task d29bd461-6dc0-4b15-a743-924e03e42f45 (netdata) was prepared for execution. 2026-03-24 03:40:00.802957 | orchestrator | 2026-03-24 03:40:00 | INFO  | It takes a moment until task d29bd461-6dc0-4b15-a743-924e03e42f45 (netdata) has been started and output is visible here. 2026-03-24 03:41:33.219159 | orchestrator | 2026-03-24 03:41:33.219269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:41:33.219284 | orchestrator | 2026-03-24 03:41:33.219292 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:41:33.219300 | orchestrator | Tuesday 24 March 2026 03:40:04 +0000 (0:00:00.224) 0:00:00.224 ********* 2026-03-24 03:41:33.219307 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-24 03:41:33.219314 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-24 03:41:33.219321 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-24 03:41:33.219327 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-24 03:41:33.219333 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-24 03:41:33.219339 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-24 03:41:33.219345 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-24 03:41:33.219351 | orchestrator | 2026-03-24 03:41:33.219357 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-24 03:41:33.219363 | orchestrator | 2026-03-24 03:41:33.219370 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 
2026-03-24 03:41:33.219376 | orchestrator | Tuesday 24 March 2026 03:40:05 +0000 (0:00:00.808) 0:00:01.032 ********* 2026-03-24 03:41:33.219385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 03:41:33.219393 | orchestrator | 2026-03-24 03:41:33.219399 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-24 03:41:33.219405 | orchestrator | Tuesday 24 March 2026 03:40:06 +0000 (0:00:01.190) 0:00:02.223 ********* 2026-03-24 03:41:33.219412 | orchestrator | ok: [testbed-manager] 2026-03-24 03:41:33.219419 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:41:33.219426 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:41:33.219432 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:41:33.219439 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:41:33.219445 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:41:33.219451 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:41:33.219457 | orchestrator | 2026-03-24 03:41:33.219463 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-24 03:41:33.219470 | orchestrator | Tuesday 24 March 2026 03:40:08 +0000 (0:00:01.863) 0:00:04.087 ********* 2026-03-24 03:41:33.219476 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:41:33.219483 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:41:33.219489 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:41:33.219495 | orchestrator | ok: [testbed-manager] 2026-03-24 03:41:33.219501 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:41:33.219507 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:41:33.219514 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:41:33.219520 | orchestrator | 2026-03-24 03:41:33.219527 | orchestrator | TASK [osism.services.netdata 
: Add repository gpg key] ************************* 2026-03-24 03:41:33.219533 | orchestrator | Tuesday 24 March 2026 03:40:11 +0000 (0:00:02.255) 0:00:06.343 ********* 2026-03-24 03:41:33.219540 | orchestrator | changed: [testbed-manager] 2026-03-24 03:41:33.219547 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:41:33.219553 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:41:33.219560 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:41:33.219566 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:41:33.219598 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:41:33.219605 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:41:33.219611 | orchestrator | 2026-03-24 03:41:33.219618 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-24 03:41:33.219640 | orchestrator | Tuesday 24 March 2026 03:40:12 +0000 (0:00:01.509) 0:00:07.853 ********* 2026-03-24 03:41:33.219647 | orchestrator | changed: [testbed-manager] 2026-03-24 03:41:33.219654 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:41:33.219660 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:41:33.219667 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:41:33.219674 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:41:33.219681 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:41:33.219687 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:41:33.219693 | orchestrator | 2026-03-24 03:41:33.219699 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-24 03:41:33.219705 | orchestrator | Tuesday 24 March 2026 03:40:27 +0000 (0:00:15.243) 0:00:23.096 ********* 2026-03-24 03:41:33.219711 | orchestrator | changed: [testbed-manager] 2026-03-24 03:41:33.219718 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:41:33.219788 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:41:33.219797 | orchestrator | changed: 
[testbed-node-4] 2026-03-24 03:41:33.219805 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:41:33.219813 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:41:33.219820 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:41:33.219829 | orchestrator | 2026-03-24 03:41:33.219837 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-03-24 03:41:33.219845 | orchestrator | Tuesday 24 March 2026 03:41:08 +0000 (0:00:41.074) 0:01:04.171 ********* 2026-03-24 03:41:33.219854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 03:41:33.219864 | orchestrator | 2026-03-24 03:41:33.219874 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-03-24 03:41:33.219883 | orchestrator | Tuesday 24 March 2026 03:41:10 +0000 (0:00:01.462) 0:01:05.634 ********* 2026-03-24 03:41:33.219892 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-24 03:41:33.219901 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-24 03:41:33.219910 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-24 03:41:33.219919 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-24 03:41:33.219946 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-24 03:41:33.219955 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-24 03:41:33.219964 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-24 03:41:33.219973 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-24 03:41:33.219981 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-24 03:41:33.219989 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 
2026-03-24 03:41:33.219998 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-24 03:41:33.220005 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-03-24 03:41:33.220015 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-24 03:41:33.220025 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-24 03:41:33.220035 | orchestrator | 2026-03-24 03:41:33.220045 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-24 03:41:33.220054 | orchestrator | Tuesday 24 March 2026 03:41:13 +0000 (0:00:03.529) 0:01:09.164 ********* 2026-03-24 03:41:33.220061 | orchestrator | ok: [testbed-manager] 2026-03-24 03:41:33.220068 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:41:33.220075 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:41:33.220082 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:41:33.220099 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:41:33.220106 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:41:33.220113 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:41:33.220120 | orchestrator | 2026-03-24 03:41:33.220127 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-24 03:41:33.220134 | orchestrator | Tuesday 24 March 2026 03:41:15 +0000 (0:00:01.293) 0:01:10.458 ********* 2026-03-24 03:41:33.220140 | orchestrator | changed: [testbed-manager] 2026-03-24 03:41:33.220147 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:41:33.220153 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:41:33.220160 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:41:33.220166 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:41:33.220173 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:41:33.220179 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:41:33.220186 | orchestrator | 2026-03-24 03:41:33.220192 | orchestrator | TASK 
[osism.services.netdata : Add netdata user to docker group] *************** 2026-03-24 03:41:33.220199 | orchestrator | Tuesday 24 March 2026 03:41:16 +0000 (0:00:01.249) 0:01:11.707 ********* 2026-03-24 03:41:33.220205 | orchestrator | ok: [testbed-manager] 2026-03-24 03:41:33.220211 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:41:33.220217 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:41:33.220223 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:41:33.220229 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:41:33.220235 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:41:33.220240 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:41:33.220246 | orchestrator | 2026-03-24 03:41:33.220253 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-24 03:41:33.220260 | orchestrator | Tuesday 24 March 2026 03:41:17 +0000 (0:00:01.134) 0:01:12.842 ********* 2026-03-24 03:41:33.220266 | orchestrator | ok: [testbed-manager] 2026-03-24 03:41:33.220272 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:41:33.220278 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:41:33.220284 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:41:33.220290 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:41:33.220296 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:41:33.220302 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:41:33.220308 | orchestrator | 2026-03-24 03:41:33.220314 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-24 03:41:33.220321 | orchestrator | Tuesday 24 March 2026 03:41:19 +0000 (0:00:01.511) 0:01:14.354 ********* 2026-03-24 03:41:33.220327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-24 03:41:33.220342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml 
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 03:41:33.220350 | orchestrator | 2026-03-24 03:41:33.220356 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-24 03:41:33.220362 | orchestrator | Tuesday 24 March 2026 03:41:20 +0000 (0:00:01.163) 0:01:15.518 ********* 2026-03-24 03:41:33.220368 | orchestrator | changed: [testbed-manager] 2026-03-24 03:41:33.220375 | orchestrator | 2026-03-24 03:41:33.220381 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-24 03:41:33.220387 | orchestrator | Tuesday 24 March 2026 03:41:22 +0000 (0:00:01.954) 0:01:17.472 ********* 2026-03-24 03:41:33.220393 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:41:33.220400 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:41:33.220406 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:41:33.220412 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:41:33.220418 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:41:33.220424 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:41:33.220430 | orchestrator | changed: [testbed-manager] 2026-03-24 03:41:33.220436 | orchestrator | 2026-03-24 03:41:33.220443 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:41:33.220456 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:41:33.220463 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:41:33.220469 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:41:33.220475 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:41:33.220488 | orchestrator | testbed-node-3 : ok=15  
changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:41:33.603195 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:41:33.603310 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:41:33.603344 | orchestrator | 2026-03-24 03:41:33.603366 | orchestrator | 2026-03-24 03:41:33.603385 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:41:33.603405 | orchestrator | Tuesday 24 March 2026 03:41:33 +0000 (0:00:10.993) 0:01:28.465 ********* 2026-03-24 03:41:33.603423 | orchestrator | =============================================================================== 2026-03-24 03:41:33.603441 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.07s 2026-03-24 03:41:33.603458 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.24s 2026-03-24 03:41:33.603476 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 10.99s 2026-03-24 03:41:33.603494 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.53s 2026-03-24 03:41:33.603512 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.26s 2026-03-24 03:41:33.603532 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.95s 2026-03-24 03:41:33.603549 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.86s 2026-03-24 03:41:33.603568 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.51s 2026-03-24 03:41:33.603586 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.51s 2026-03-24 03:41:33.603604 | orchestrator | osism.services.netdata : Include config tasks 
--------------------------- 1.46s 2026-03-24 03:41:33.603615 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.29s 2026-03-24 03:41:33.603627 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.25s 2026-03-24 03:41:33.603638 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.19s 2026-03-24 03:41:33.603650 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.16s 2026-03-24 03:41:33.603660 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.13s 2026-03-24 03:41:33.603671 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s 2026-03-24 03:41:35.869002 | orchestrator | 2026-03-24 03:41:35 | INFO  | Task 69e7e35a-ddfa-4b56-9e46-de49dee70a6d (prometheus) was prepared for execution. 2026-03-24 03:41:35.869089 | orchestrator | 2026-03-24 03:41:35 | INFO  | It takes a moment until task 69e7e35a-ddfa-4b56-9e46-de49dee70a6d (prometheus) has been started and output is visible here. 
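PLAY RECAP lines like the ones above are the usual signal a CI consumer checks to decide whether a play succeeded (failed=0, unreachable=0 on every host). A minimal sketch of such a check, assuming the standard Ansible recap format; the helper name is hypothetical and not part of OSISM or Zuul tooling:

```python
import re

# Matches the counter fields of a standard Ansible PLAY RECAP line.
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_ok(line: str) -> bool:
    """Return True if a PLAY RECAP line reports no failed or unreachable hosts."""
    m = RECAP_RE.search(line)
    if not m:
        raise ValueError("not a PLAY RECAP line: %r" % line)
    return m.group("failed") == "0" and m.group("unreachable") == "0"

# Example against a recap line from the netdata play above:
print(recap_ok(
    "testbed-node-0 : ok=15  changed=7  unreachable=0 "
    "failed=0 skipped=0 rescued=0 ignored=0"
))
```

Running this over every recap line in the console output gives a quick pass/fail summary per host without re-parsing the full task stream.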
2026-03-24 03:41:44.269070 | orchestrator | 2026-03-24 03:41:44.269173 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:41:44.269181 | orchestrator | 2026-03-24 03:41:44.269186 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:41:44.269206 | orchestrator | Tuesday 24 March 2026 03:41:39 +0000 (0:00:00.265) 0:00:00.265 ********* 2026-03-24 03:41:44.269210 | orchestrator | ok: [testbed-manager] 2026-03-24 03:41:44.269215 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:41:44.269229 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:41:44.269233 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:41:44.269237 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:41:44.269241 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:41:44.269245 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:41:44.269249 | orchestrator | 2026-03-24 03:41:44.269253 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:41:44.269257 | orchestrator | Tuesday 24 March 2026 03:41:40 +0000 (0:00:00.825) 0:00:01.091 ********* 2026-03-24 03:41:44.269261 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-24 03:41:44.269265 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-24 03:41:44.269269 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-24 03:41:44.269273 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-24 03:41:44.269276 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-24 03:41:44.269280 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-24 03:41:44.269284 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-24 03:41:44.269287 | orchestrator | 2026-03-24 03:41:44.269291 | orchestrator | PLAY [Apply role 
prometheus] *************************************************** 2026-03-24 03:41:44.269295 | orchestrator | 2026-03-24 03:41:44.269298 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-24 03:41:44.269302 | orchestrator | Tuesday 24 March 2026 03:41:41 +0000 (0:00:00.761) 0:00:01.853 ********* 2026-03-24 03:41:44.269306 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 03:41:44.269311 | orchestrator | 2026-03-24 03:41:44.269315 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-24 03:41:44.269319 | orchestrator | Tuesday 24 March 2026 03:41:42 +0000 (0:00:01.169) 0:00:03.022 ********* 2026-03-24 03:41:44.269325 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-24 03:41:44.269333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:44.269338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:44.269346 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:44.269365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:44.269370 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:41:44.269374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:44.269378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:41:44.269382 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:44.269388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:44.269391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:44.269402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:45.370879 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-24 03:41:45.370989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:45.371000 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:45.371008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:45.371015 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:45.371059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:45.371083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:45.371095 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:45.371103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:45.371110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-24 03:41:45.371117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:45.371123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-24 03:41:45.371135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:45.371141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-24 03:41:45.371156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:49.685488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:49.685577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:49.685590 | orchestrator |
2026-03-24 03:41:49.685599 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-24 03:41:49.685609 | orchestrator | Tuesday 24 March 2026 03:41:45 +0000 (0:00:02.693) 0:00:05.716 *********
2026-03-24 03:41:49.685618 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 03:41:49.685628 | orchestrator |
2026-03-24 03:41:49.685636 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-03-24 03:41:49.685643 | orchestrator | Tuesday 24 March 2026 03:41:46 +0000 (0:00:01.369) 0:00:07.085 *********
2026-03-24 03:41:49.685653 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-24 03:41:49.685692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:49.685701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:49.685710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:49.685794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:49.685802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:49.685808 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:49.685813 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:49.685824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:49.685832 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:49.685837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:49.685850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:52.344053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:52.344140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:52.344155 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:52.344187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-24 03:41:52.344199 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-24 03:41:52.344210 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-24 03:41:52.344228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:52.344246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:52.344252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:52.344258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-24 03:41:52.344270 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:52.344276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:52.344282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:52.344287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:52.344298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.423218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.423308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.423343 | orchestrator |
2026-03-24 03:41:53.423353 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-03-24 03:41:53.423363 | orchestrator | Tuesday 24 March 2026 03:41:52 +0000 (0:00:05.601) 0:00:12.687 *********
2026-03-24 03:41:53.423373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:53.423383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.423392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.423402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:53.423452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.423503 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-24 03:41:53.423513 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:53.423530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:53.423540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-24 03:41:53.423550 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.423558 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:41:53.423567 | orchestrator | skipping: [testbed-manager]
2026-03-24 03:41:53.423580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:53.423595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.705316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.705424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-24 03:41:53.705440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.705474 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:41:53.705491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-24 03:41:53.705503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.705515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 03:41:53.705542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes':
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:53.705575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:53.705607 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:41:53.705620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:53.705633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:53.705646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 03:41:53.705658 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:41:53.705670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:53.705682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:53.705695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 03:41:53.705708 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:41:53.705817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:53.705853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:54.442387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 03:41:54.442506 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:41:54.442524 | orchestrator | 2026-03-24 03:41:54.442565 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-24 03:41:54.442578 | orchestrator | Tuesday 24 March 2026 03:41:53 +0000 (0:00:01.370) 0:00:14.058 ********* 2026-03-24 03:41:54.442590 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-24 03:41:54.442602 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:54.442615 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:54.442645 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-24 03:41:54.442696 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:54.442709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:54.442740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:54.442747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:54.442753 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:54.442760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:54.442771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:54.442783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:54.442796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:55.558152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:55.558239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:55.558247 | orchestrator | skipping: [testbed-manager] 
2026-03-24 03:41:55.558253 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:41:55.558257 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:41:55.558262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:55.558268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:55.558276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:55.558321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:55.558329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 03:41:55.558336 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:41:55.558359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:55.558367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:55.558373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 03:41:55.558380 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:41:55.558387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:55.558394 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 
03:41:55.558408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 03:41:55.558413 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:41:55.558417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 03:41:55.558427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 03:41:58.741028 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 03:41:58.741107 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:41:58.741118 | orchestrator | 2026-03-24 03:41:58.741127 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-24 03:41:58.741132 | orchestrator | Tuesday 24 March 2026 03:41:55 +0000 (0:00:01.845) 0:00:15.903 ********* 2026-03-24 03:41:58.741138 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-24 03:41:58.741144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:58.741168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:58.741183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:58.741187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:58.741201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:58.741205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:58.741209 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:41:58.741213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:41:58.741220 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:41:58.741224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:41:58.741231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:41:58.741237 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:41:58.741244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:42:01.086667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:42:01.086857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:01.086899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:01.086911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:01.086937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-24 03:42:01.086952 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-24 03:42:01.086983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-24 03:42:01.086996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-24 03:42:01.087007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:42:01.087026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:42:01.087036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:42:01.087052 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:01.087062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:01.087073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:01.087091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 
03:42:04.312912 | orchestrator | 2026-03-24 03:42:04.312983 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-24 03:42:04.312990 | orchestrator | Tuesday 24 March 2026 03:42:01 +0000 (0:00:05.523) 0:00:21.426 ********* 2026-03-24 03:42:04.312994 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 03:42:04.313000 | orchestrator | 2026-03-24 03:42:04.313004 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-24 03:42:04.313023 | orchestrator | Tuesday 24 March 2026 03:42:01 +0000 (0:00:00.763) 0:00:22.190 ********* 2026-03-24 03:42:04.313029 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072411, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7552214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313035 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072411, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7552214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313039 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072411, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7552214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313054 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072434, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.76102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313059 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072411, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7552214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 03:42:04.313063 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072434, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.76102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313078 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072411, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7552214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313086 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072411, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7552214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313090 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072434, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1774316913.76102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313094 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072411, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7552214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313101 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072406, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313106 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072406, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313109 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072434, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.76102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:04.313118 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072434, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.76102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.691990 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072434, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.76102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692083 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072425, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7589974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692092 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072406, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692111 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072434, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.76102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 03:42:05.692117 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072406, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692123 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072425, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7589974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692181 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072406, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692202 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072425, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774316913.7589974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692209 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072404, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7513752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692215 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072406, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692224 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072404, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7513752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692230 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072425, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7589974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692236 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072425, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7589974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692248 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072404, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7513752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:05.692261 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072425, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7589974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915084 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072415, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.755855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915219 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072415, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.755855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915254 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072415, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.755855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915267 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1072423, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7581706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915280 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072404, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7513752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915311 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072404, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7513752, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915323 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072404, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7513752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915354 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1072423, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7581706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915366 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072418, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7563381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-03-24 03:42:06.915382 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072406, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 03:42:06.915394 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072418, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7563381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915405 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1072423, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7581706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915425 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072410, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915437 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072410, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:06.915456 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072415, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.755855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.077866 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 
'inode': 1072433, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.077972 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072415, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.755855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.077985 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072415, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.755855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078011 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072433, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078064 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072418, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7563381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078072 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072399, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7500083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078081 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072399, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7500083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-03-24 03:42:08.078102 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1072423, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7581706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078115 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1072423, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7581706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078123 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072425, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7589974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 03:42:08.078136 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072410, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078144 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1072423, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7581706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078151 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072446, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.763411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078158 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072446, 'dev': 95, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.763411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:08.078170 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072418, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7563381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232634 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072418, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7563381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232784 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1072430, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232819 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072433, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232827 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072418, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7563381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232835 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1072430, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 
03:42:09.232842 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072410, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232850 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072410, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232896 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072433, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232921 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072405, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.751904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232932 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072433, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232943 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072405, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.751904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232955 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072410, 'dev': 95, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232966 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072399, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7500083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.232978 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072399, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7500083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:09.233002 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072404, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7513752, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-24 03:42:10.351681 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072402, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.750488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:10.351786 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072399, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7500083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:10.351793 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072446, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.763411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 
03:42:10.351798 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072402, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.750488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:10.351802 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072433, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:10.351805 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072446, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.763411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:10.351809 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1072430, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:10.351849 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072422, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:10.351854 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072422, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-24 03:42:10.351858 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072405, 'dev': 95, 'nlink': 
1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.751904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:10.351862 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072446, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.763411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:10.351866 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072399, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7500083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:10.351871 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1072430, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:10.351875 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072402, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.750488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:10.351924 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072420, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378562 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072415, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.755855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378642 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072420, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378652 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1072430, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378658 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072405, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.751904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378663 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072445, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7626724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378686 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:42:11.378693 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072446, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.763411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378748 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072422, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378768 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072405, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.751904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378774 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072402, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.750488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378779 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072445, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7626724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378784 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:42:11.378790 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072422, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378796 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1072430, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378806 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072420, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378815 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072402, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.750488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:11.378824 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072420, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296152 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072445, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7626724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296235 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:42:17.296243 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072405, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.751904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296248 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072422, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296253 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072402, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.750488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296272 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1072423, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7581706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296286 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072445, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7626724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296290 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:42:17.296303 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072420, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296308 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072422, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296312 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072445, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7626724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296316 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:42:17.296320 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072420, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296327 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072445, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7626724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296331 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:42:17.296335 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072418, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7563381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296342 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072410, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7529566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:17.296350 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072433, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:40.849194 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072399, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7500083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:40.849303 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072446, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.763411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:40.849315 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1072430, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.760301, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:40.849345 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072405, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.751904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:40.849354 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072402, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.750488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:40.849377 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072422, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:40.849387 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072420, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.757423, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:40.849411 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False,
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072445, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7626724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-24 03:42:40.849421 | orchestrator |
2026-03-24 03:42:40.849432 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-03-24 03:42:40.849442 | orchestrator | Tuesday 24 March 2026 03:42:22 +0000 (0:00:21.030) 0:00:43.221 *********
2026-03-24 03:42:40.849452 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-24 03:42:40.849462 | orchestrator |
2026-03-24 03:42:40.849471 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-03-24 03:42:40.849480 | orchestrator | Tuesday 24 March 2026 03:42:23 +0000 (0:00:00.699) 0:00:43.921 *********
2026-03-24 03:42:40.849496 | orchestrator | [WARNING]: Skipped
2026-03-24 03:42:40.849506 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849515 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-03-24 03:42:40.849524 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849532 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-03-24 03:42:40.849541 | orchestrator | [WARNING]: Skipped
2026-03-24 03:42:40.849550 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849559 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-03-24 03:42:40.849567 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849577 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-03-24 03:42:40.849585 | orchestrator | [WARNING]: Skipped
2026-03-24 03:42:40.849594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849602 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-03-24 03:42:40.849611 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849619 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-03-24 03:42:40.849628 | orchestrator | [WARNING]: Skipped
2026-03-24 03:42:40.849637 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849645 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-03-24 03:42:40.849654 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849662 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-03-24 03:42:40.849671 | orchestrator | [WARNING]: Skipped
2026-03-24 03:42:40.849679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849716 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-03-24 03:42:40.849725 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849734 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-03-24 03:42:40.849744 | orchestrator | [WARNING]: Skipped
2026-03-24 03:42:40.849753 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849763 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-03-24 03:42:40.849773 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849783 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-03-24 03:42:40.849793 | orchestrator | [WARNING]: Skipped
2026-03-24 03:42:40.849803 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849818 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-03-24 03:42:40.849828 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-03-24 03:42:40.849839 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-03-24 03:42:40.849849 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-24 03:42:40.849859 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-24 03:42:40.849869 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-24 03:42:40.849879 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-24 03:42:40.849889 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-24 03:42:40.849899 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-24 03:42:40.849907 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-24 03:42:40.849916 | orchestrator |
2026-03-24 03:42:40.849925 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-03-24 03:42:40.849933 | orchestrator | Tuesday 24 March 2026 03:42:25 +0000 (0:00:01.819) 0:00:45.740 *********
2026-03-24 03:42:40.849942 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-24 03:42:40.849957 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:42:40.849966 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-24 03:42:40.849975 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:42:40.849984 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-24 03:42:40.849993 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:42:40.850007 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-24 03:42:56.593844 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:42:56.593933 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-24 03:42:56.593946 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:42:56.593953 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-24 03:42:56.593961 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:42:56.593966 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-03-24 03:42:56.593970 | orchestrator |
2026-03-24 03:42:56.593976 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-03-24 03:42:56.593981 | orchestrator | Tuesday 24 March 2026 03:42:40 +0000 (0:00:15.456) 0:01:01.197 *********
2026-03-24 03:42:56.593985 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-24 03:42:56.593989 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:42:56.593993 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-24 03:42:56.593997 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:42:56.594001 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-24 03:42:56.594005 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:42:56.594009 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-24 03:42:56.594044 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:42:56.594048 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-24 03:42:56.594052 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:42:56.594056 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-24 03:42:56.594060 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:42:56.594064 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-03-24 03:42:56.594068 | orchestrator |
2026-03-24 03:42:56.594072 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-03-24 03:42:56.594076 | orchestrator | Tuesday 24 March 2026 03:42:43 +0000 (0:00:02.549) 0:01:03.746 *********
2026-03-24 03:42:56.594080 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-24 03:42:56.594085 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:42:56.594090 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-24 03:42:56.594094 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:42:56.594098 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-24 03:42:56.594102 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:42:56.594105 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-24 03:42:56.594109 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:42:56.594113 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-24 03:42:56.594135 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-24 03:42:56.594139 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:42:56.594143 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-03-24 03:42:56.594156 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:42:56.594160 | orchestrator |
2026-03-24 03:42:56.594164 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-03-24 03:42:56.594168 | orchestrator | Tuesday 24 March 2026 03:42:44 +0000 (0:00:01.495) 0:01:05.242 *********
2026-03-24 03:42:56.594172 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-24 03:42:56.594176 | orchestrator |
2026-03-24 03:42:56.594180 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-24 03:42:56.594184 | orchestrator | Tuesday 24 March 2026 03:42:45 +0000 (0:00:00.666) 0:01:05.908 *********
2026-03-24 03:42:56.594188 | orchestrator | skipping: [testbed-manager]
2026-03-24 03:42:56.594192 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:42:56.594195 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:42:56.594199 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:42:56.594203 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:42:56.594206 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:42:56.594210 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:42:56.594214 | orchestrator |
2026-03-24 03:42:56.594218 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-24 03:42:56.594221 | orchestrator | Tuesday 24 March 2026 03:42:46 +0000 (0:00:00.759) 0:01:06.667 *********
2026-03-24 03:42:56.594225 | orchestrator | skipping: [testbed-manager]
2026-03-24 03:42:56.594229 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:42:56.594233 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:42:56.594236 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:42:56.594240 | orchestrator | changed: [testbed-node-0]
2026-03-24 03:42:56.594244 | orchestrator | changed: [testbed-node-1]
2026-03-24 03:42:56.594248 | orchestrator | changed: [testbed-node-2]
2026-03-24 03:42:56.594251 | orchestrator |
2026-03-24 03:42:56.594255 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-24 03:42:56.594271 | orchestrator | Tuesday 24 March 2026 03:42:48 +0000 (0:00:02.050) 0:01:08.718 *********
2026-03-24 03:42:56.594275 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-24 03:42:56.594279 | orchestrator | skipping: [testbed-manager]
2026-03-24 03:42:56.594283 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-24 03:42:56.594286 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:42:56.594290 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-24 03:42:56.594294 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-24 03:42:56.594298 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-24 03:42:56.594302 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:42:56.594305 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:42:56.594309 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:42:56.594313 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-24 03:42:56.594317 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:42:56.594321 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-24 03:42:56.594327 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:42:56.594333 | orchestrator |
2026-03-24 03:42:56.594339 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-24 03:42:56.594350 | orchestrator | Tuesday 24 March 2026 03:42:49 +0000 (0:00:01.413) 0:01:10.131 *********
2026-03-24 03:42:56.594360 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-24 03:42:56.594369 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-24 03:42:56.594375 | orchestrator | skipping: [testbed-node-0]
2026-03-24 03:42:56.594381 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-24 03:42:56.594387 | orchestrator | skipping: [testbed-node-1]
2026-03-24 03:42:56.594393 | orchestrator | skipping: [testbed-node-2]
2026-03-24 03:42:56.594400 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-24 03:42:56.594406 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:42:56.594412 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-24 03:42:56.594418 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:42:56.594424 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-24 03:42:56.594430 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-24 03:42:56.594437 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:42:56.594444 | orchestrator |
2026-03-24 03:42:56.594451 | orchestrator | TASK [prometheus : Find extra prometheus server
config files] ****************** 2026-03-24 03:42:56.594457 | orchestrator | Tuesday 24 March 2026 03:42:51 +0000 (0:00:01.453) 0:01:11.585 ********* 2026-03-24 03:42:56.594464 | orchestrator | [WARNING]: Skipped 2026-03-24 03:42:56.594472 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-24 03:42:56.594478 | orchestrator | due to this access issue: 2026-03-24 03:42:56.594485 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-24 03:42:56.594491 | orchestrator | not a directory 2026-03-24 03:42:56.594497 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 03:42:56.594504 | orchestrator | 2026-03-24 03:42:56.594510 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-24 03:42:56.594520 | orchestrator | Tuesday 24 March 2026 03:42:52 +0000 (0:00:01.112) 0:01:12.698 ********* 2026-03-24 03:42:56.594525 | orchestrator | skipping: [testbed-manager] 2026-03-24 03:42:56.594530 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:42:56.594534 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:42:56.594538 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:42:56.594543 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:42:56.594547 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:42:56.594551 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:42:56.594556 | orchestrator | 2026-03-24 03:42:56.594560 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-24 03:42:56.594565 | orchestrator | Tuesday 24 March 2026 03:42:53 +0000 (0:00:00.906) 0:01:13.604 ********* 2026-03-24 03:42:56.594569 | orchestrator | skipping: [testbed-manager] 2026-03-24 03:42:56.594573 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:42:56.594578 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:42:56.594582 | orchestrator | 
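The `[WARNING]` above is benign: kolla-ansible's "Find extra prometheus server config files" task scans an optional overlay directory that simply does not exist in this configuration repository, so the task returns `ok` with no files. If extra Prometheus server config snippets were wanted, recreating that overlay layout would make the subsequent "Template extra prometheus server config files" task pick them up. A minimal sketch, using a scratch root in place of `/opt/configuration` (the real path is deployment-specific):

```shell
# Sketch: recreate the optional overlay layout that kolla-ansible scans.
# A scratch root stands in for /opt/configuration (deployment-specific).
root="$(mktemp -d)"
mkdir -p "${root}/environments/kolla/files/overlays/prometheus/extras"
# Extra Prometheus server config snippets (*.yml templates) would go here.
ls -d "${root}/environments/kolla/files/overlays/prometheus/extras"
```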
skipping: [testbed-node-2] 2026-03-24 03:42:56.594586 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:42:56.594590 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:42:56.594594 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:42:56.594599 | orchestrator | 2026-03-24 03:42:56.594603 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-24 03:42:56.594607 | orchestrator | Tuesday 24 March 2026 03:42:54 +0000 (0:00:00.890) 0:01:14.494 ********* 2026-03-24 03:42:56.594621 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-24 03:42:58.426201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:42:58.426276 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:42:58.426282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:42:58.426287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:42:58.426301 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:42:58.426307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:58.426311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:58.426341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:42:58.426346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-24 03:42:58.426350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:42:58.426356 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:42:58.426360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:58.426367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:42:58.426371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:42:58.426379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:42:58.426389 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:43:00.277780 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-24 03:43:00.277865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-24 03:43:00.277872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:43:00.277890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-24 03:43:00.277908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-24 03:43:00.277913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:43:00.277929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:43:00.277966 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:43:00.277972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-24 03:43:00.277976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:43:00.277984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:43:00.277988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 03:43:00.277997 | orchestrator | 2026-03-24 03:43:00.278002 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-24 03:43:00.278007 | orchestrator | Tuesday 24 
March 2026 03:42:58 +0000 (0:00:04.284) 0:01:18.779 ********* 2026-03-24 03:43:00.278040 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-24 03:43:00.278046 | orchestrator | skipping: [testbed-manager] 2026-03-24 03:43:00.278050 | orchestrator | 2026-03-24 03:43:00.278054 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-24 03:43:00.278058 | orchestrator | Tuesday 24 March 2026 03:42:59 +0000 (0:00:01.187) 0:01:19.966 ********* 2026-03-24 03:43:00.278062 | orchestrator | 2026-03-24 03:43:00.278066 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-24 03:43:00.278070 | orchestrator | Tuesday 24 March 2026 03:42:59 +0000 (0:00:00.221) 0:01:20.187 ********* 2026-03-24 03:43:00.278073 | orchestrator | 2026-03-24 03:43:00.278077 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-24 03:43:00.278081 | orchestrator | Tuesday 24 March 2026 03:42:59 +0000 (0:00:00.069) 0:01:20.257 ********* 2026-03-24 03:43:00.278085 | orchestrator | 2026-03-24 03:43:00.278088 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-24 03:43:00.278092 | orchestrator | Tuesday 24 March 2026 03:42:59 +0000 (0:00:00.067) 0:01:20.325 ********* 2026-03-24 03:43:00.278096 | orchestrator | 2026-03-24 03:43:00.278100 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-24 03:43:00.278104 | orchestrator | Tuesday 24 March 2026 03:43:00 +0000 (0:00:00.066) 0:01:20.392 ********* 2026-03-24 03:43:00.278107 | orchestrator | 2026-03-24 03:43:00.278111 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-24 03:43:00.278115 | orchestrator | Tuesday 24 March 2026 03:43:00 +0000 (0:00:00.068) 0:01:20.460 ********* 2026-03-24 03:43:00.278119 | orchestrator | 
2026-03-24 03:43:00.278123 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-24 03:43:00.278130 | orchestrator | Tuesday 24 March 2026 03:43:00 +0000 (0:00:00.064) 0:01:20.524 ********* 2026-03-24 03:44:41.686205 | orchestrator | 2026-03-24 03:44:41.686285 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-24 03:44:41.686293 | orchestrator | Tuesday 24 March 2026 03:43:00 +0000 (0:00:00.090) 0:01:20.615 ********* 2026-03-24 03:44:41.686297 | orchestrator | changed: [testbed-manager] 2026-03-24 03:44:41.686302 | orchestrator | 2026-03-24 03:44:41.686306 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-24 03:44:41.686310 | orchestrator | Tuesday 24 March 2026 03:43:21 +0000 (0:00:21.590) 0:01:42.206 ********* 2026-03-24 03:44:41.686315 | orchestrator | changed: [testbed-manager] 2026-03-24 03:44:41.686319 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:44:41.686323 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:44:41.686326 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:44:41.686330 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:44:41.686334 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:44:41.686338 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:44:41.686342 | orchestrator | 2026-03-24 03:44:41.686346 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-24 03:44:41.686350 | orchestrator | Tuesday 24 March 2026 03:43:34 +0000 (0:00:12.272) 0:01:54.478 ********* 2026-03-24 03:44:41.686354 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:44:41.686358 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:44:41.686377 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:44:41.686382 | orchestrator | 2026-03-24 03:44:41.686385 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-memcached-exporter container] *** 2026-03-24 03:44:41.686390 | orchestrator | Tuesday 24 March 2026 03:43:44 +0000 (0:00:10.360) 0:02:04.839 ********* 2026-03-24 03:44:41.686393 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:44:41.686397 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:44:41.686401 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:44:41.686405 | orchestrator | 2026-03-24 03:44:41.686409 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-24 03:44:41.686412 | orchestrator | Tuesday 24 March 2026 03:43:54 +0000 (0:00:10.074) 0:02:14.913 ********* 2026-03-24 03:44:41.686416 | orchestrator | changed: [testbed-manager] 2026-03-24 03:44:41.686420 | orchestrator | changed: [testbed-node-3] 2026-03-24 03:44:41.686424 | orchestrator | changed: [testbed-node-4] 2026-03-24 03:44:41.686427 | orchestrator | changed: [testbed-node-5] 2026-03-24 03:44:41.686431 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:44:41.686435 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:44:41.686439 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:44:41.686446 | orchestrator | 2026-03-24 03:44:41.686452 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-24 03:44:41.686458 | orchestrator | Tuesday 24 March 2026 03:44:08 +0000 (0:00:14.089) 0:02:29.002 ********* 2026-03-24 03:44:41.686465 | orchestrator | changed: [testbed-manager] 2026-03-24 03:44:41.686472 | orchestrator | 2026-03-24 03:44:41.686490 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-24 03:44:41.686503 | orchestrator | Tuesday 24 March 2026 03:44:16 +0000 (0:00:07.705) 0:02:36.708 ********* 2026-03-24 03:44:41.686509 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:44:41.686515 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:44:41.686532 | orchestrator | changed: 
[testbed-node-0]
2026-03-24 03:44:41.686539 | orchestrator |
2026-03-24 03:44:41.686544 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-24 03:44:41.686550 | orchestrator | Tuesday 24 March 2026 03:44:26 +0000 (0:00:10.150) 0:02:46.859 *********
2026-03-24 03:44:41.686556 | orchestrator | changed: [testbed-manager]
2026-03-24 03:44:41.686562 | orchestrator |
2026-03-24 03:44:41.686567 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-24 03:44:41.686573 | orchestrator | Tuesday 24 March 2026 03:44:31 +0000 (0:00:05.045) 0:02:51.904 *********
2026-03-24 03:44:41.686579 | orchestrator | changed: [testbed-node-3]
2026-03-24 03:44:41.686585 | orchestrator | changed: [testbed-node-4]
2026-03-24 03:44:41.686591 | orchestrator | changed: [testbed-node-5]
2026-03-24 03:44:41.686597 | orchestrator |
2026-03-24 03:44:41.686602 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 03:44:41.686610 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-24 03:44:41.686618 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-24 03:44:41.686624 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-24 03:44:41.686675 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-24 03:44:41.686682 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-24 03:44:41.686688 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-24 03:44:41.686695 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-24 03:44:41.686709 | orchestrator |
2026-03-24 03:44:41.686716 | orchestrator |
2026-03-24 03:44:41.686720 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 03:44:41.686724 | orchestrator | Tuesday 24 March 2026 03:44:41 +0000 (0:00:09.666) 0:03:01.571 *********
2026-03-24 03:44:41.686728 | orchestrator | ===============================================================================
2026-03-24 03:44:41.686734 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.59s
2026-03-24 03:44:41.686758 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.03s
2026-03-24 03:44:41.686766 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.46s
2026-03-24 03:44:41.686772 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.09s
2026-03-24 03:44:41.686778 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.27s
2026-03-24 03:44:41.686783 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.36s
2026-03-24 03:44:41.686789 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.15s
2026-03-24 03:44:41.686794 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.07s
2026-03-24 03:44:41.686799 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.67s
2026-03-24 03:44:41.686805 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.71s
2026-03-24 03:44:41.686811 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.60s
2026-03-24 03:44:41.686817 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.52s
2026-03-24 03:44:41.686823 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.05s
2026-03-24 03:44:41.686829 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.28s
2026-03-24 03:44:41.686836 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.69s
2026-03-24 03:44:41.686841 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.55s
2026-03-24 03:44:41.686847 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.05s
2026-03-24 03:44:41.686854 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.85s
2026-03-24 03:44:41.686860 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.82s
2026-03-24 03:44:41.686867 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.50s
2026-03-24 03:44:44.020542 | orchestrator | 2026-03-24 03:44:44 | INFO  | Task 9c0d5d98-f76c-4045-8ce6-1968170c2354 (grafana) was prepared for execution.
2026-03-24 03:44:44.020691 | orchestrator | 2026-03-24 03:44:44 | INFO  | It takes a moment until task 9c0d5d98-f76c-4045-8ce6-1968170c2354 (grafana) has been started and output is visible here.
2026-03-24 03:44:53.727303 | orchestrator |
2026-03-24 03:44:53.727442 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-24 03:44:53.727461 | orchestrator |
2026-03-24 03:44:53.727474 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-24 03:44:53.727506 | orchestrator | Tuesday 24 March 2026 03:44:48 +0000 (0:00:00.268) 0:00:00.268 *********
2026-03-24 03:44:53.727535 | orchestrator | ok: [testbed-node-0]
2026-03-24 03:44:53.727558 | orchestrator | ok: [testbed-node-1]
2026-03-24 03:44:53.727575 | orchestrator | ok: [testbed-node-2]
2026-03-24 03:44:53.727594 | orchestrator |
2026-03-24 03:44:53.727610 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-24 03:44:53.727767 | orchestrator | Tuesday 24 March 2026 03:44:48 +0000 (0:00:00.318) 0:00:00.587 *********
2026-03-24 03:44:53.727793 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-24 03:44:53.727814 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-03-24 03:44:53.727865 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-03-24 03:44:53.727879 | orchestrator |
2026-03-24 03:44:53.727893 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-03-24 03:44:53.727905 | orchestrator |
2026-03-24 03:44:53.727918 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-03-24 03:44:53.727930 | orchestrator | Tuesday 24 March 2026 03:44:48 +0000 (0:00:00.417) 0:00:01.004 *********
2026-03-24 03:44:53.727944 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 03:44:53.727957 | orchestrator |
2026-03-24 03:44:53.727969 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-03-24 03:44:53.727982 | orchestrator | Tuesday 24 March 2026 03:44:49 +0000 (0:00:00.536) 0:00:01.541 ********* 2026-03-24 03:44:53.727998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:44:53.728016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:44:53.728030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:44:53.728043 | orchestrator | 2026-03-24 03:44:53.728060 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-24 03:44:53.728080 | orchestrator | Tuesday 24 March 2026 03:44:50 +0000 (0:00:00.856) 0:00:02.397 ********* 2026-03-24 03:44:53.728098 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-24 03:44:53.728116 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-24 03:44:53.728136 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 03:44:53.728154 | orchestrator | 2026-03-24 03:44:53.728173 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-24 03:44:53.728193 | orchestrator | Tuesday 24 March 2026 03:44:51 +0000 (0:00:00.841) 0:00:03.238 ********* 2026-03-24 03:44:53.728212 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:44:53.728231 | orchestrator | 2026-03-24 03:44:53.728250 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-24 03:44:53.728275 | orchestrator | Tuesday 24 March 2026 03:44:51 +0000 (0:00:00.558) 0:00:03.796 ********* 2026-03-24 03:44:53.728320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:44:53.728333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:44:53.728345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:44:53.728357 | orchestrator | 2026-03-24 03:44:53.728368 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-24 03:44:53.728379 | orchestrator | Tuesday 24 March 2026 03:44:53 +0000 
(0:00:01.360) 0:00:05.157 ********* 2026-03-24 03:44:53.728390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-24 03:44:53.728402 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:44:53.728413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-24 03:44:53.728425 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:44:53.728451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-24 03:45:00.565222 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:45:00.565299 | orchestrator | 2026-03-24 03:45:00.565306 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-24 03:45:00.565312 | orchestrator | Tuesday 24 March 2026 03:44:53 +0000 (0:00:00.650) 0:00:05.807 ********* 2026-03-24 03:45:00.565317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-24 03:45:00.565324 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:45:00.565329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-24 03:45:00.565333 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:45:00.565337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-24 03:45:00.565341 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:45:00.565345 | orchestrator | 2026-03-24 03:45:00.565349 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-24 03:45:00.565353 | orchestrator | Tuesday 24 March 2026 03:44:54 +0000 (0:00:00.602) 0:00:06.410 ********* 2026-03-24 03:45:00.565357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:45:00.565378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:45:00.565402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:45:00.565407 | orchestrator | 2026-03-24 03:45:00.565411 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-24 03:45:00.565415 | orchestrator | Tuesday 24 March 2026 03:44:55 +0000 (0:00:01.385) 0:00:07.795 ********* 2026-03-24 03:45:00.565418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:45:00.565423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:45:00.565427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:45:00.565430 | 
orchestrator | 2026-03-24 03:45:00.565438 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-24 03:45:00.565442 | orchestrator | Tuesday 24 March 2026 03:44:57 +0000 (0:00:01.618) 0:00:09.413 ********* 2026-03-24 03:45:00.565446 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:45:00.565449 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:45:00.565453 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:45:00.565457 | orchestrator | 2026-03-24 03:45:00.565461 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-24 03:45:00.565465 | orchestrator | Tuesday 24 March 2026 03:44:57 +0000 (0:00:00.319) 0:00:09.733 ********* 2026-03-24 03:45:00.565469 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-24 03:45:00.565474 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-24 03:45:00.565477 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-24 03:45:00.565481 | orchestrator | 2026-03-24 03:45:00.565485 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-24 03:45:00.565489 | orchestrator | Tuesday 24 March 2026 03:44:58 +0000 (0:00:01.272) 0:00:11.005 ********* 2026-03-24 03:45:00.565493 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-24 03:45:00.565498 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-24 03:45:00.565502 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-24 03:45:00.565506 | orchestrator | 2026-03-24 03:45:00.565510 | 
orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-24 03:45:00.565519 | orchestrator | Tuesday 24 March 2026 03:45:00 +0000 (0:00:01.635) 0:00:12.641 ********* 2026-03-24 03:45:07.050521 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 03:45:07.050653 | orchestrator | 2026-03-24 03:45:07.050675 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-24 03:45:07.050701 | orchestrator | Tuesday 24 March 2026 03:45:01 +0000 (0:00:00.684) 0:00:13.326 ********* 2026-03-24 03:45:07.050725 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-24 03:45:07.050739 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-24 03:45:07.050753 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:45:07.050792 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:45:07.050805 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:45:07.050819 | orchestrator | 2026-03-24 03:45:07.050832 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-24 03:45:07.050845 | orchestrator | Tuesday 24 March 2026 03:45:01 +0000 (0:00:00.746) 0:00:14.072 ********* 2026-03-24 03:45:07.050858 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:45:07.050872 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:45:07.050884 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:45:07.050897 | orchestrator | 2026-03-24 03:45:07.050910 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-24 03:45:07.050923 | orchestrator | Tuesday 24 March 2026 03:45:02 +0000 (0:00:00.332) 0:00:14.404 ********* 2026-03-24 03:45:07.050940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1072142, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6584866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.050983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1072142, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6584866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.050996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1072142, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6584866, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.051010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1072253, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6914902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.051057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1072253, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6914902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.051073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1072253, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6914902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.051088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1072161, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.660955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.051101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1072161, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.660955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.051125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1072161, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.660955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.051139 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1072254, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6929557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.051153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1072254, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6929557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:07.051181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1072254, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6929557, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.814325 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1072185, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6662607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.814445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1072185, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6662607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.814490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1072185, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6662607, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-24 03:45:10.814502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1072240, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6901045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.814526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1072240, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6901045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.815405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1072240, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6901045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.815480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1072139, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6572974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.815497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1072139, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6572974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.815522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1072139, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6572974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-03-24 03:45:10.815534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1072150, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6591585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.815546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1072150, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6591585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:10.815558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1072150, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6591585, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-03-24 03:45:10.815582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1072166, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6623087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1072166, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6623087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1072166, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6623087, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1072194, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6849556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1072194, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6849556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1072194, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6849556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1072247, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.69075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1072247, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.69075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1072247, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.69075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1072155, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6601336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1072155, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6601336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1072155, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6601336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1072236, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6879556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:14.378947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1072236, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6879556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.778689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1072236, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6879556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.778879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1072189, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6672578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.778906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1072189, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6672578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.778925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1072189, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6672578, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.778944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1072180, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6657753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.778992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1072180, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6657753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.779083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1072180, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774316913.6657753, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.779105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1072177, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6643116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.779125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1072177, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6643116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.779144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1072177, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1774316913.6643116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.779162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1072234, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6869555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.779191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1072234, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6869555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:18.779236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1072234, 'dev': 95, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6869555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:22.392382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1072172, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6643116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:22.392494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1072172, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6643116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:22.392509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1072172, 'dev': 
95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6643116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:22.392522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1072245, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6901045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:22.392549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1072245, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6901045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:22.392596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1072245, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6901045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:22.392718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072392, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7482615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:22.392730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072392, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7482615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:45:22.392741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072392, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7482615, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:22.392751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1072306, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.713956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:22.392761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1072306, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.713956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:22.392777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1072306, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.713956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:22.392803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1072279, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6994083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1072279, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6994083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1072279, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6994083, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1072324, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7181642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1072324, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7181642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1072324, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7181642, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1072265, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6946337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1072265, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6946337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1072265, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6946337, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1072350, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7318783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1072350, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7318783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1072350, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7318783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1072328, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.727894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:26.165996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1072328, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.727894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.445928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1072328, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.727894, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1072356, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7328577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1072356, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7328577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1072356, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7328577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072389, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7469566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072389, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7469566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072389, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7469566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1072348, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7299564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1072348, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7299564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1072348, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7299564, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1072319, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7165768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1072319, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7165768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:30.446206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1072319, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7165768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1072295, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7079558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1072295, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7079558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1072295, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7079558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1072316, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.714956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1072316, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.714956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1072316, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.714956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1072284, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.703442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1072284, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.703442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1072284, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.703442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1072321, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7170947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1072321, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7170947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1072321, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7170947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:34.096955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072387, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7449565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.440814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072387, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7449565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.440910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072387, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7449565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.440929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1072384, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7437289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.440936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1072384, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7437289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.440941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1072384, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7437289, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.440945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1072267, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6953058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.440968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1072267, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6953058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.440983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1072267, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6953058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.440995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1072271, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6979556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.441003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1072271, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6979556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.441012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1072271, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.6979556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.441020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1072345, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7289562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:45:38.441034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1072345, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7289562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-24 03:47:06.802224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1072345, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7289562, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth':
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:47:06.802314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1072381, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7408566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:47:06.802336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1072381, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7408566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:47:06.802344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1072381, 'dev': 95, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774316913.7408566, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-24 03:47:06.802350 | orchestrator | 2026-03-24 03:47:06.802357 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-24 03:47:06.802364 | orchestrator | Tuesday 24 March 2026 03:45:39 +0000 (0:00:37.255) 0:00:51.660 ********* 2026-03-24 03:47:06.802370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:47:06.802388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:47:06.802411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-24 03:47:06.802417 | orchestrator | 2026-03-24 03:47:06.802423 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-24 03:47:06.802429 | orchestrator | Tuesday 24 March 2026 03:45:40 +0000 (0:00:01.019) 0:00:52.680 ********* 2026-03-24 03:47:06.802434 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:47:06.802441 | orchestrator | 2026-03-24 03:47:06.802447 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-24 03:47:06.802452 | orchestrator | Tuesday 24 March 2026 03:45:43 +0000 (0:00:02.471) 0:00:55.152 ********* 2026-03-24 03:47:06.802457 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:47:06.802463 | orchestrator | 2026-03-24 03:47:06.802468 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-24 03:47:06.802474 | orchestrator | Tuesday 24 March 2026 03:45:45 +0000 (0:00:02.501) 0:00:57.653 ********* 2026-03-24 03:47:06.802479 | orchestrator | 2026-03-24 03:47:06.802485 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-24 03:47:06.802494 | orchestrator | Tuesday 24 March 2026 03:45:45 +0000 (0:00:00.074) 0:00:57.728 ********* 2026-03-24 03:47:06.802500 | orchestrator | 2026-03-24 03:47:06.802505 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-03-24 03:47:06.802511 | orchestrator | Tuesday 24 March 2026 03:45:45 +0000 (0:00:00.089) 0:00:57.818 ********* 2026-03-24 03:47:06.802516 | orchestrator | 2026-03-24 03:47:06.802522 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-24 03:47:06.802527 | orchestrator | Tuesday 24 March 2026 03:45:45 +0000 (0:00:00.078) 0:00:57.896 ********* 2026-03-24 03:47:06.802533 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:47:06.802539 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:47:06.802544 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:47:06.802550 | orchestrator | 2026-03-24 03:47:06.802555 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-24 03:47:06.802561 | orchestrator | Tuesday 24 March 2026 03:45:52 +0000 (0:00:07.107) 0:01:05.004 ********* 2026-03-24 03:47:06.802566 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:47:06.802636 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:47:06.802642 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-03-24 03:47:06.802649 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-24 03:47:06.802655 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2026-03-24 03:47:06.802661 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:47:06.802667 | orchestrator | 2026-03-24 03:47:06.802673 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-24 03:47:06.802684 | orchestrator | Tuesday 24 March 2026 03:46:31 +0000 (0:00:38.994) 0:01:43.998 ********* 2026-03-24 03:47:06.802690 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:47:06.802695 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:47:06.802701 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:47:06.802706 | orchestrator | 2026-03-24 03:47:06.802712 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-24 03:47:06.802717 | orchestrator | Tuesday 24 March 2026 03:47:01 +0000 (0:00:29.345) 0:02:13.343 ********* 2026-03-24 03:47:06.802723 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:47:06.802728 | orchestrator | 2026-03-24 03:47:06.802734 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-24 03:47:06.802739 | orchestrator | Tuesday 24 March 2026 03:47:03 +0000 (0:00:02.487) 0:02:15.831 ********* 2026-03-24 03:47:06.802744 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:47:06.802750 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:47:06.802755 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:47:06.802761 | orchestrator | 2026-03-24 03:47:06.802766 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-24 03:47:06.802772 | orchestrator | Tuesday 24 March 2026 03:47:04 +0000 (0:00:00.313) 0:02:16.144 ********* 2026-03-24 03:47:06.802779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-03-24 03:47:06.802793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-03-24 03:47:07.359964 | orchestrator | 2026-03-24 03:47:07.360058 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-03-24 03:47:07.360069 | orchestrator | Tuesday 24 March 2026 03:47:06 +0000 (0:00:02.732) 0:02:18.876 ********* 2026-03-24 03:47:07.360076 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:47:07.360083 | orchestrator | 2026-03-24 03:47:07.360090 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:47:07.360098 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-24 03:47:07.360106 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-24 03:47:07.360112 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-24 03:47:07.360118 | orchestrator | 2026-03-24 03:47:07.360125 | orchestrator | 2026-03-24 03:47:07.360131 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:47:07.360137 | orchestrator | Tuesday 24 March 2026 03:47:07 +0000 (0:00:00.270) 0:02:19.147 ********* 2026-03-24 03:47:07.360143 | orchestrator | =============================================================================== 2026-03-24 03:47:07.360149 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.99s 2026-03-24 03:47:07.360156 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.26s 2026-03-24 03:47:07.360162 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 29.35s 2026-03-24 03:47:07.360168 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.11s 2026-03-24 03:47:07.360189 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.73s 2026-03-24 03:47:07.360236 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.50s 2026-03-24 03:47:07.360243 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.49s 2026-03-24 03:47:07.360249 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.47s 2026-03-24 03:47:07.360255 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.64s 2026-03-24 03:47:07.360261 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.62s 2026-03-24 03:47:07.360268 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.39s 2026-03-24 03:47:07.360274 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.36s 2026-03-24 03:47:07.360280 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s 2026-03-24 03:47:07.360286 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s 2026-03-24 03:47:07.360291 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.86s 2026-03-24 03:47:07.360298 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.84s 2026-03-24 03:47:07.360303 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.75s 2026-03-24 03:47:07.360309 | orchestrator | grafana : Find custom grafana dashboards 
-------------------------------- 0.68s 2026-03-24 03:47:07.360315 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.65s 2026-03-24 03:47:07.360323 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.60s 2026-03-24 03:47:07.623150 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-03-24 03:47:07.632425 | orchestrator | + set -e 2026-03-24 03:47:07.632500 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 03:47:07.632511 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 03:47:07.632519 | orchestrator | ++ INTERACTIVE=false 2026-03-24 03:47:07.633209 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 03:47:07.633229 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 03:47:07.633237 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 03:47:07.633246 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 03:47:07.633255 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 03:47:07.633263 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-24 03:47:07.633272 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 03:47:07.633280 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 03:47:07.633289 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 03:47:07.633298 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 03:47:07.633307 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 03:47:07.633315 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 03:47:07.633325 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 03:47:07.633333 | orchestrator | ++ export ARA=false 2026-03-24 03:47:07.633342 | orchestrator | ++ ARA=false 2026-03-24 03:47:07.633350 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 03:47:07.633359 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 03:47:07.633366 | orchestrator | ++ export TEMPEST=false 2026-03-24 03:47:07.633374 | orchestrator | ++ 
TEMPEST=false 2026-03-24 03:47:07.633381 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 03:47:07.633388 | orchestrator | ++ IS_ZUUL=true 2026-03-24 03:47:07.633396 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 03:47:07.633403 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 03:47:07.633410 | orchestrator | ++ export EXTERNAL_API=false 2026-03-24 03:47:07.633417 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 03:47:07.633424 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 03:47:07.633431 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 03:47:07.633439 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 03:47:07.633446 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 03:47:07.633453 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 03:47:07.633460 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 03:47:07.634251 | orchestrator | ++ semver 9.5.0 8.0.0 2026-03-24 03:47:07.695884 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-24 03:47:07.695963 | orchestrator | + osism apply clusterapi 2026-03-24 03:47:09.698235 | orchestrator | 2026-03-24 03:47:09 | INFO  | Task 560a9ec8-e9fd-4cc6-a31d-af99017d0407 (clusterapi) was prepared for execution. 2026-03-24 03:47:09.698331 | orchestrator | 2026-03-24 03:47:09 | INFO  | It takes a moment until task 560a9ec8-e9fd-4cc6-a31d-af99017d0407 (clusterapi) has been started and output is visible here. 
2026-03-24 03:48:06.790825 | orchestrator | 2026-03-24 03:48:06.790915 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-03-24 03:48:06.790923 | orchestrator | 2026-03-24 03:48:06.790929 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-03-24 03:48:06.790936 | orchestrator | Tuesday 24 March 2026 03:47:13 +0000 (0:00:00.137) 0:00:00.137 ********* 2026-03-24 03:48:06.790942 | orchestrator | included: cert_manager for testbed-manager 2026-03-24 03:48:06.790948 | orchestrator | 2026-03-24 03:48:06.790953 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-03-24 03:48:06.790959 | orchestrator | Tuesday 24 March 2026 03:47:13 +0000 (0:00:00.177) 0:00:00.314 ********* 2026-03-24 03:48:06.790964 | orchestrator | changed: [testbed-manager] 2026-03-24 03:48:06.790971 | orchestrator | 2026-03-24 03:48:06.790976 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-03-24 03:48:06.790981 | orchestrator | Tuesday 24 March 2026 03:47:18 +0000 (0:00:05.082) 0:00:05.397 ********* 2026-03-24 03:48:06.790987 | orchestrator | changed: [testbed-manager] 2026-03-24 03:48:06.790992 | orchestrator | 2026-03-24 03:48:06.790997 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-03-24 03:48:06.791002 | orchestrator | 2026-03-24 03:48:06.791007 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-03-24 03:48:06.791013 | orchestrator | Tuesday 24 March 2026 03:47:46 +0000 (0:00:27.500) 0:00:32.898 ********* 2026-03-24 03:48:06.791018 | orchestrator | ok: [testbed-manager] 2026-03-24 03:48:06.791023 | orchestrator | 2026-03-24 03:48:06.791029 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-03-24 03:48:06.791037 | orchestrator | Tuesday 
24 March 2026 03:47:47 +0000 (0:00:00.964) 0:00:33.863 ********* 2026-03-24 03:48:06.791045 | orchestrator | ok: [testbed-manager] 2026-03-24 03:48:06.791056 | orchestrator | 2026-03-24 03:48:06.791069 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-03-24 03:48:06.791077 | orchestrator | Tuesday 24 March 2026 03:47:47 +0000 (0:00:00.119) 0:00:33.982 ********* 2026-03-24 03:48:06.791086 | orchestrator | ok: [testbed-manager] 2026-03-24 03:48:06.791094 | orchestrator | 2026-03-24 03:48:06.791102 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-03-24 03:48:06.791128 | orchestrator | Tuesday 24 March 2026 03:48:04 +0000 (0:00:16.745) 0:00:50.727 ********* 2026-03-24 03:48:06.791137 | orchestrator | skipping: [testbed-manager] 2026-03-24 03:48:06.791145 | orchestrator | 2026-03-24 03:48:06.791154 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-03-24 03:48:06.791163 | orchestrator | Tuesday 24 March 2026 03:48:04 +0000 (0:00:00.131) 0:00:50.859 ********* 2026-03-24 03:48:06.791172 | orchestrator | changed: [testbed-manager] 2026-03-24 03:48:06.791179 | orchestrator | 2026-03-24 03:48:06.791187 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:48:06.791198 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 03:48:06.791208 | orchestrator | 2026-03-24 03:48:06.791216 | orchestrator | 2026-03-24 03:48:06.791224 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:48:06.791233 | orchestrator | Tuesday 24 March 2026 03:48:06 +0000 (0:00:02.118) 0:00:52.978 ********* 2026-03-24 03:48:06.791242 | orchestrator | =============================================================================== 2026-03-24 03:48:06.791250 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 27.50s 2026-03-24 03:48:06.791259 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.75s 2026-03-24 03:48:06.791268 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.08s 2026-03-24 03:48:06.791277 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.12s 2026-03-24 03:48:06.791309 | orchestrator | Get capi-system namespace phase ----------------------------------------- 0.96s 2026-03-24 03:48:06.791318 | orchestrator | Include cert_manager role ----------------------------------------------- 0.18s 2026-03-24 03:48:06.791327 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.13s 2026-03-24 03:48:06.791336 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.12s 2026-03-24 03:48:07.054345 | orchestrator | + osism apply magnum 2026-03-24 03:48:09.010999 | orchestrator | 2026-03-24 03:48:09 | INFO  | Task 6adfe45d-2cd9-4de6-ac95-d62b00ef6c1b (magnum) was prepared for execution. 2026-03-24 03:48:09.011080 | orchestrator | 2026-03-24 03:48:09 | INFO  | It takes a moment until task 6adfe45d-2cd9-4de6-ac95-d62b00ef6c1b (magnum) has been started and output is visible here. 
2026-03-24 03:48:52.827218 | orchestrator | 2026-03-24 03:48:52.827315 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:48:52.827325 | orchestrator | 2026-03-24 03:48:52.827333 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:48:52.827340 | orchestrator | Tuesday 24 March 2026 03:48:13 +0000 (0:00:00.261) 0:00:00.261 ********* 2026-03-24 03:48:52.827348 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:48:52.827355 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:48:52.827361 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:48:52.827368 | orchestrator | 2026-03-24 03:48:52.827374 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:48:52.827380 | orchestrator | Tuesday 24 March 2026 03:48:13 +0000 (0:00:00.346) 0:00:00.607 ********* 2026-03-24 03:48:52.827387 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-24 03:48:52.827394 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-24 03:48:52.827400 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-24 03:48:52.827406 | orchestrator | 2026-03-24 03:48:52.827413 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-24 03:48:52.827419 | orchestrator | 2026-03-24 03:48:52.827425 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-24 03:48:52.827431 | orchestrator | Tuesday 24 March 2026 03:48:13 +0000 (0:00:00.421) 0:00:01.029 ********* 2026-03-24 03:48:52.827438 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:48:52.827445 | orchestrator | 2026-03-24 03:48:52.827451 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-24 
03:48:52.827457 | orchestrator | Tuesday 24 March 2026 03:48:14 +0000 (0:00:00.536) 0:00:01.565 ********* 2026-03-24 03:48:52.827464 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-24 03:48:52.827470 | orchestrator | 2026-03-24 03:48:52.827476 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-24 03:48:52.827482 | orchestrator | Tuesday 24 March 2026 03:48:18 +0000 (0:00:03.741) 0:00:05.307 ********* 2026-03-24 03:48:52.827488 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-24 03:48:52.827495 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-24 03:48:52.827501 | orchestrator | 2026-03-24 03:48:52.827508 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-24 03:48:52.827594 | orchestrator | Tuesday 24 March 2026 03:48:24 +0000 (0:00:06.799) 0:00:12.106 ********* 2026-03-24 03:48:52.827601 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-24 03:48:52.827607 | orchestrator | 2026-03-24 03:48:52.827613 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-24 03:48:52.827619 | orchestrator | Tuesday 24 March 2026 03:48:28 +0000 (0:00:03.513) 0:00:15.619 ********* 2026-03-24 03:48:52.827626 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-24 03:48:52.827632 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-24 03:48:52.827662 | orchestrator | 2026-03-24 03:48:52.827678 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-24 03:48:52.827691 | orchestrator | Tuesday 24 March 2026 03:48:32 +0000 (0:00:03.965) 0:00:19.585 ********* 2026-03-24 03:48:52.827717 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-03-24 03:48:52.827728 | orchestrator | 2026-03-24 03:48:52.827738 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-24 03:48:52.827749 | orchestrator | Tuesday 24 March 2026 03:48:35 +0000 (0:00:03.550) 0:00:23.136 ********* 2026-03-24 03:48:52.827760 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-24 03:48:52.827770 | orchestrator | 2026-03-24 03:48:52.827781 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-24 03:48:52.827793 | orchestrator | Tuesday 24 March 2026 03:48:39 +0000 (0:00:04.070) 0:00:27.207 ********* 2026-03-24 03:48:52.827804 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:48:52.827815 | orchestrator | 2026-03-24 03:48:52.827826 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-24 03:48:52.827838 | orchestrator | Tuesday 24 March 2026 03:48:43 +0000 (0:00:03.561) 0:00:30.768 ********* 2026-03-24 03:48:52.827849 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:48:52.827859 | orchestrator | 2026-03-24 03:48:52.827867 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-24 03:48:52.827874 | orchestrator | Tuesday 24 March 2026 03:48:47 +0000 (0:00:04.135) 0:00:34.903 ********* 2026-03-24 03:48:52.827881 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:48:52.827888 | orchestrator | 2026-03-24 03:48:52.827895 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-24 03:48:52.827902 | orchestrator | Tuesday 24 March 2026 03:48:51 +0000 (0:00:03.543) 0:00:38.446 ********* 2026-03-24 03:48:52.827929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:48:52.827941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:48:52.827948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:48:52.827969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:48:52.827977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:48:52.827990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:00.046098 | orchestrator | 2026-03-24 03:49:00.046208 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-24 03:49:00.046225 | orchestrator | Tuesday 24 March 2026 03:48:52 +0000 (0:00:01.599) 0:00:40.046 ********* 2026-03-24 03:49:00.046236 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:49:00.046247 | orchestrator | 2026-03-24 03:49:00.046257 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-24 03:49:00.046267 | orchestrator | Tuesday 24 March 2026 03:48:52 +0000 (0:00:00.142) 0:00:40.188 ********* 2026-03-24 03:49:00.046277 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:49:00.046287 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:49:00.046297 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:49:00.046307 | orchestrator | 2026-03-24 03:49:00.046317 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-24 03:49:00.046327 | orchestrator | Tuesday 24 March 2026 03:48:53 +0000 (0:00:00.312) 0:00:40.500 ********* 2026-03-24 03:49:00.046336 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 03:49:00.046367 | orchestrator | 2026-03-24 03:49:00.046377 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-24 03:49:00.046387 | orchestrator | Tuesday 24 March 2026 03:48:54 +0000 (0:00:00.811) 0:00:41.311 ********* 2026-03-24 03:49:00.046399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:00.046427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:00.046438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:00.046467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:00.046479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:00.046497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:00.046507 | orchestrator | 2026-03-24 03:49:00.046539 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-24 03:49:00.046549 
| orchestrator | Tuesday 24 March 2026 03:48:56 +0000 (0:00:02.415) 0:00:43.727 ********* 2026-03-24 03:49:00.046559 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:49:00.046570 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:49:00.046580 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:49:00.046589 | orchestrator | 2026-03-24 03:49:00.046599 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-24 03:49:00.046614 | orchestrator | Tuesday 24 March 2026 03:48:56 +0000 (0:00:00.472) 0:00:44.199 ********* 2026-03-24 03:49:00.046624 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:49:00.046634 | orchestrator | 2026-03-24 03:49:00.046644 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-24 03:49:00.046654 | orchestrator | Tuesday 24 March 2026 03:48:57 +0000 (0:00:00.523) 0:00:44.723 ********* 2026-03-24 03:49:00.046664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:00.046682 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:00.861152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:00.861243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:00.861271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:00.861280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:00.861288 | orchestrator | 2026-03-24 03:49:00.861297 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-24 03:49:00.861305 | orchestrator | Tuesday 24 March 2026 03:49:00 +0000 (0:00:02.553) 0:00:47.277 ********* 2026-03-24 03:49:00.861325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 03:49:00.861353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:49:00.861361 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:49:00.861370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 03:49:00.861382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:49:00.861389 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:49:00.861397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 03:49:00.861415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:49:04.398964 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:49:04.399054 | orchestrator | 2026-03-24 
03:49:04.399063 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-24 03:49:04.399069 | orchestrator | Tuesday 24 March 2026 03:49:00 +0000 (0:00:00.809) 0:00:48.086 ********* 2026-03-24 03:49:04.399075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 03:49:04.399096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:49:04.399100 | 
orchestrator | skipping: [testbed-node-0] 2026-03-24 03:49:04.399105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 03:49:04.399109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:49:04.399126 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:49:04.399142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 03:49:04.399147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:49:04.399151 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:49:04.399154 | orchestrator | 2026-03-24 03:49:04.399159 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-24 03:49:04.399162 | orchestrator | Tuesday 24 March 2026 03:49:01 +0000 (0:00:00.882) 0:00:48.969 ********* 2026-03-24 03:49:04.399170 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:04.399175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:04.399186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:10.391907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:10.392026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:10.392055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:10.392064 | orchestrator | 2026-03-24 03:49:10.392072 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-24 03:49:10.392080 | orchestrator | Tuesday 24 March 2026 03:49:04 +0000 (0:00:02.659) 0:00:51.628 ********* 2026-03-24 03:49:10.392087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:10.392119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:10.392124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:10.392132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:10.392136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:10.392145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:10.392149 | orchestrator | 2026-03-24 03:49:10.392153 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-24 03:49:10.392157 | orchestrator | Tuesday 24 March 2026 03:49:09 +0000 (0:00:05.314) 0:00:56.943 ********* 2026-03-24 03:49:10.392166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 03:49:12.327244 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:49:12.327338 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:49:12.327361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 03:49:12.327369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:49:12.327388 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:49:12.327393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-24 03:49:12.327409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 03:49:12.327415 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:49:12.327423 | orchestrator | 2026-03-24 03:49:12.327432 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-24 03:49:12.327440 | orchestrator | Tuesday 24 March 2026 03:49:10 +0000 (0:00:00.681) 0:00:57.624 ********* 2026-03-24 03:49:12.327449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:12.327462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:12.327477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-24 03:49:12.327485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:49:12.327499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-24 03:50:07.571345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-24 03:50:07.571429 | orchestrator | 2026-03-24 03:50:07.571438 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-24 03:50:07.571446 | orchestrator | Tuesday 24 March 2026 03:49:12 +0000 (0:00:01.931) 0:00:59.556 ********* 2026-03-24 03:50:07.571472 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:50:07.571541 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:50:07.571549 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:50:07.571555 | orchestrator | 2026-03-24 03:50:07.571561 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-24 03:50:07.571568 | orchestrator | Tuesday 24 March 2026 03:49:12 +0000 (0:00:00.457) 0:01:00.013 ********* 2026-03-24 03:50:07.571572 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:50:07.571576 | orchestrator | 2026-03-24 03:50:07.571580 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-24 03:50:07.571584 | orchestrator | Tuesday 24 March 2026 03:49:15 +0000 (0:00:02.353) 0:01:02.367 ********* 2026-03-24 03:50:07.571588 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:50:07.571591 | orchestrator | 2026-03-24 03:50:07.571595 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-24 03:50:07.571599 | orchestrator | Tuesday 24 March 2026 03:49:17 +0000 (0:00:02.371) 0:01:04.738 ********* 2026-03-24 03:50:07.571602 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:50:07.571606 | orchestrator | 2026-03-24 03:50:07.571610 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-24 03:50:07.571614 | orchestrator | Tuesday 24 March 2026 03:49:33 +0000 (0:00:16.182) 0:01:20.921 ********* 2026-03-24 03:50:07.571617 | orchestrator | 2026-03-24 03:50:07.571621 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-03-24 03:50:07.571625 | orchestrator | Tuesday 24 March 2026 03:49:33 +0000 (0:00:00.071) 0:01:20.993 ********* 2026-03-24 03:50:07.571629 | orchestrator | 2026-03-24 03:50:07.571633 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-24 03:50:07.571636 | orchestrator | Tuesday 24 March 2026 03:49:33 +0000 (0:00:00.071) 0:01:21.064 ********* 2026-03-24 03:50:07.571640 | orchestrator | 2026-03-24 03:50:07.571644 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-24 03:50:07.571647 | orchestrator | Tuesday 24 March 2026 03:49:33 +0000 (0:00:00.072) 0:01:21.137 ********* 2026-03-24 03:50:07.571651 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:50:07.571655 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:50:07.571659 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:50:07.571663 | orchestrator | 2026-03-24 03:50:07.571666 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-24 03:50:07.571670 | orchestrator | Tuesday 24 March 2026 03:49:52 +0000 (0:00:18.313) 0:01:39.451 ********* 2026-03-24 03:50:07.571674 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:50:07.571678 | orchestrator | changed: [testbed-node-1] 2026-03-24 03:50:07.571681 | orchestrator | changed: [testbed-node-2] 2026-03-24 03:50:07.571685 | orchestrator | 2026-03-24 03:50:07.571689 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:50:07.571694 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 03:50:07.571699 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-24 03:50:07.571703 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-24 03:50:07.571706 | orchestrator | 2026-03-24 03:50:07.571710 | orchestrator | 2026-03-24 03:50:07.571714 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:50:07.571718 | orchestrator | Tuesday 24 March 2026 03:50:07 +0000 (0:00:15.050) 0:01:54.501 ********* 2026-03-24 03:50:07.571722 | orchestrator | =============================================================================== 2026-03-24 03:50:07.571725 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.31s 2026-03-24 03:50:07.571729 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.18s 2026-03-24 03:50:07.571738 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.05s 2026-03-24 03:50:07.571742 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.80s 2026-03-24 03:50:07.571746 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.31s 2026-03-24 03:50:07.571750 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.14s 2026-03-24 03:50:07.571753 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.07s 2026-03-24 03:50:07.571768 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.97s 2026-03-24 03:50:07.571772 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.74s 2026-03-24 03:50:07.571776 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.56s 2026-03-24 03:50:07.571780 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.55s 2026-03-24 03:50:07.571783 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.54s 2026-03-24 03:50:07.571787 | 
orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.51s 2026-03-24 03:50:07.571791 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.66s 2026-03-24 03:50:07.571795 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.55s 2026-03-24 03:50:07.571798 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.42s 2026-03-24 03:50:07.571802 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.37s 2026-03-24 03:50:07.571806 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.35s 2026-03-24 03:50:07.571809 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.93s 2026-03-24 03:50:07.571817 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.60s 2026-03-24 03:50:08.207461 | orchestrator | ok: Runtime: 1:37:57.987017 2026-03-24 03:50:08.460980 | 2026-03-24 03:50:08.461139 | TASK [Deploy in a nutshell] 2026-03-24 03:50:09.003771 | orchestrator | skipping: Conditional result was False 2026-03-24 03:50:09.028186 | 2026-03-24 03:50:09.028386 | TASK [Bootstrap services] 2026-03-24 03:50:09.753668 | orchestrator | 2026-03-24 03:50:09.753824 | orchestrator | # BOOTSTRAP 2026-03-24 03:50:09.753840 | orchestrator | 2026-03-24 03:50:09.753848 | orchestrator | + set -e 2026-03-24 03:50:09.753855 | orchestrator | + echo 2026-03-24 03:50:09.753863 | orchestrator | + echo '# BOOTSTRAP' 2026-03-24 03:50:09.753874 | orchestrator | + echo 2026-03-24 03:50:09.753899 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-24 03:50:09.761939 | orchestrator | + set -e 2026-03-24 03:50:09.762006 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-24 03:50:11.817087 | orchestrator | 2026-03-24 03:50:11 | INFO  | It takes a 
moment until task dc0cb419-05d6-4663-9c9b-b5aefa6cee3d (flavor-manager) has been started and output is visible here. 2026-03-24 03:50:18.772147 | orchestrator | 2026-03-24 03:50:14 | INFO  | Flavor SCS-1L-1 created 2026-03-24 03:50:18.772273 | orchestrator | 2026-03-24 03:50:14 | INFO  | Flavor SCS-1L-1-5 created 2026-03-24 03:50:18.772293 | orchestrator | 2026-03-24 03:50:14 | INFO  | Flavor SCS-1V-2 created 2026-03-24 03:50:18.772303 | orchestrator | 2026-03-24 03:50:15 | INFO  | Flavor SCS-1V-2-5 created 2026-03-24 03:50:18.772312 | orchestrator | 2026-03-24 03:50:15 | INFO  | Flavor SCS-1V-4 created 2026-03-24 03:50:18.772321 | orchestrator | 2026-03-24 03:50:15 | INFO  | Flavor SCS-1V-4-10 created 2026-03-24 03:50:18.772330 | orchestrator | 2026-03-24 03:50:15 | INFO  | Flavor SCS-1V-8 created 2026-03-24 03:50:18.772341 | orchestrator | 2026-03-24 03:50:15 | INFO  | Flavor SCS-1V-8-20 created 2026-03-24 03:50:18.772362 | orchestrator | 2026-03-24 03:50:15 | INFO  | Flavor SCS-2V-4 created 2026-03-24 03:50:18.772371 | orchestrator | 2026-03-24 03:50:15 | INFO  | Flavor SCS-2V-4-10 created 2026-03-24 03:50:18.772380 | orchestrator | 2026-03-24 03:50:16 | INFO  | Flavor SCS-2V-8 created 2026-03-24 03:50:18.772390 | orchestrator | 2026-03-24 03:50:16 | INFO  | Flavor SCS-2V-8-20 created 2026-03-24 03:50:18.772398 | orchestrator | 2026-03-24 03:50:16 | INFO  | Flavor SCS-2V-16 created 2026-03-24 03:50:18.772408 | orchestrator | 2026-03-24 03:50:16 | INFO  | Flavor SCS-2V-16-50 created 2026-03-24 03:50:18.772417 | orchestrator | 2026-03-24 03:50:16 | INFO  | Flavor SCS-4V-8 created 2026-03-24 03:50:18.772426 | orchestrator | 2026-03-24 03:50:16 | INFO  | Flavor SCS-4V-8-20 created 2026-03-24 03:50:18.772434 | orchestrator | 2026-03-24 03:50:16 | INFO  | Flavor SCS-4V-16 created 2026-03-24 03:50:18.772442 | orchestrator | 2026-03-24 03:50:17 | INFO  | Flavor SCS-4V-16-50 created 2026-03-24 03:50:18.772452 | orchestrator | 2026-03-24 03:50:17 | INFO  | Flavor 
SCS-4V-32 created 2026-03-24 03:50:18.772461 | orchestrator | 2026-03-24 03:50:17 | INFO  | Flavor SCS-4V-32-100 created 2026-03-24 03:50:18.772469 | orchestrator | 2026-03-24 03:50:17 | INFO  | Flavor SCS-8V-16 created 2026-03-24 03:50:18.772504 | orchestrator | 2026-03-24 03:50:17 | INFO  | Flavor SCS-8V-16-50 created 2026-03-24 03:50:18.772514 | orchestrator | 2026-03-24 03:50:17 | INFO  | Flavor SCS-8V-32 created 2026-03-24 03:50:18.772527 | orchestrator | 2026-03-24 03:50:17 | INFO  | Flavor SCS-8V-32-100 created 2026-03-24 03:50:18.772536 | orchestrator | 2026-03-24 03:50:18 | INFO  | Flavor SCS-16V-32 created 2026-03-24 03:50:18.772545 | orchestrator | 2026-03-24 03:50:18 | INFO  | Flavor SCS-16V-32-100 created 2026-03-24 03:50:18.772554 | orchestrator | 2026-03-24 03:50:18 | INFO  | Flavor SCS-2V-4-20s created 2026-03-24 03:50:18.772563 | orchestrator | 2026-03-24 03:50:18 | INFO  | Flavor SCS-4V-8-50s created 2026-03-24 03:50:18.772572 | orchestrator | 2026-03-24 03:50:18 | INFO  | Flavor SCS-8V-32-100s created 2026-03-24 03:50:21.015349 | orchestrator | 2026-03-24 03:50:21 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-24 03:50:31.170758 | orchestrator | 2026-03-24 03:50:31 | INFO  | Task bd4196fa-c4b5-4a51-9340-172475d80ccb (bootstrap-basic) was prepared for execution. 2026-03-24 03:50:31.170945 | orchestrator | 2026-03-24 03:50:31 | INFO  | It takes a moment until task bd4196fa-c4b5-4a51-9340-172475d80ccb (bootstrap-basic) has been started and output is visible here. 
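The flavor names created by flavor-manager above follow the SCS flavor naming convention: `SCS-<n>V-<ram>[-<disk>]` encodes vCPU count, RAM in GiB, and an optional root disk in GiB, with `L` in place of `V` for low-performance cores and a trailing `s` for SSD-backed disks. A minimal parser for the names seen in this log (a sketch of the convention, not the official SCS validator):

```python
import re

# Matches the SCS flavor names logged above, e.g. SCS-2V-4-10 or SCS-1L-1-5.
# <cpus><L|V>-<ram>[-<disk><flags>]: vCPUs, RAM GiB, optional disk GiB.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_suffix>[LV])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_flags>[a-z]*))?$"
)

def parse_scs_flavor(name: str) -> dict:
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "ram_gib": int(m.group("ram")),
        # Flavors without a disk part (e.g. SCS-2V-4) have no root disk.
        "disk_gib": int(m.group("disk")) if m.group("disk") else 0,
    }
```

For example, `SCS-2V-4-20s` parses to 2 vCPUs, 4 GiB RAM, and a 20 GiB SSD root disk, matching the flavors created in the run above.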
2026-03-24 03:51:12.132851 | orchestrator | 2026-03-24 03:51:12.132943 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-24 03:51:12.132954 | orchestrator | 2026-03-24 03:51:12.132962 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-24 03:51:12.132970 | orchestrator | Tuesday 24 March 2026 03:50:35 +0000 (0:00:00.068) 0:00:00.068 ********* 2026-03-24 03:51:12.132977 | orchestrator | ok: [localhost] 2026-03-24 03:51:12.132986 | orchestrator | 2026-03-24 03:51:12.132993 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-24 03:51:12.133000 | orchestrator | Tuesday 24 March 2026 03:50:37 +0000 (0:00:01.730) 0:00:01.798 ********* 2026-03-24 03:51:12.133004 | orchestrator | ok: [localhost] 2026-03-24 03:51:12.133008 | orchestrator | 2026-03-24 03:51:12.133013 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-24 03:51:12.133017 | orchestrator | Tuesday 24 March 2026 03:50:43 +0000 (0:00:06.383) 0:00:08.181 ********* 2026-03-24 03:51:12.133021 | orchestrator | changed: [localhost] 2026-03-24 03:51:12.133026 | orchestrator | 2026-03-24 03:51:12.133032 | orchestrator | TASK [Create public network] *************************************************** 2026-03-24 03:51:12.133040 | orchestrator | Tuesday 24 March 2026 03:50:49 +0000 (0:00:06.060) 0:00:14.242 ********* 2026-03-24 03:51:12.133046 | orchestrator | changed: [localhost] 2026-03-24 03:51:12.133053 | orchestrator | 2026-03-24 03:51:12.133059 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-24 03:51:12.133066 | orchestrator | Tuesday 24 March 2026 03:50:54 +0000 (0:00:05.225) 0:00:19.467 ********* 2026-03-24 03:51:12.133075 | orchestrator | changed: [localhost] 2026-03-24 03:51:12.133082 | orchestrator | 2026-03-24 03:51:12.133088 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-24 03:51:12.133092 | orchestrator | Tuesday 24 March 2026 03:51:00 +0000 (0:00:06.186) 0:00:25.654 ********* 2026-03-24 03:51:12.133095 | orchestrator | changed: [localhost] 2026-03-24 03:51:12.133099 | orchestrator | 2026-03-24 03:51:12.133103 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-24 03:51:12.133109 | orchestrator | Tuesday 24 March 2026 03:51:05 +0000 (0:00:04.251) 0:00:29.905 ********* 2026-03-24 03:51:12.133115 | orchestrator | changed: [localhost] 2026-03-24 03:51:12.133120 | orchestrator | 2026-03-24 03:51:12.133126 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-24 03:51:12.133139 | orchestrator | Tuesday 24 March 2026 03:51:08 +0000 (0:00:03.561) 0:00:33.466 ********* 2026-03-24 03:51:12.133145 | orchestrator | ok: [localhost] 2026-03-24 03:51:12.133151 | orchestrator | 2026-03-24 03:51:12.133156 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:51:12.133162 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 03:51:12.133169 | orchestrator | 2026-03-24 03:51:12.133175 | orchestrator | 2026-03-24 03:51:12.133182 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:51:12.133186 | orchestrator | Tuesday 24 March 2026 03:51:11 +0000 (0:00:03.193) 0:00:36.660 ********* 2026-03-24 03:51:12.133189 | orchestrator | =============================================================================== 2026-03-24 03:51:12.133193 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.38s 2026-03-24 03:51:12.133197 | orchestrator | Set public network to default ------------------------------------------- 6.19s 2026-03-24 03:51:12.133201 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 6.06s 2026-03-24 03:51:12.133205 | orchestrator | Create public network --------------------------------------------------- 5.23s 2026-03-24 03:51:12.133228 | orchestrator | Create public subnet ---------------------------------------------------- 4.25s 2026-03-24 03:51:12.133232 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.56s 2026-03-24 03:51:12.133236 | orchestrator | Create manager role ----------------------------------------------------- 3.19s 2026-03-24 03:51:12.133240 | orchestrator | Gathering Facts --------------------------------------------------------- 1.73s 2026-03-24 03:51:14.338743 | orchestrator | 2026-03-24 03:51:14 | INFO  | It takes a moment until task 71491a67-018d-4175-b4ce-fb9df7d1d77c (image-manager) has been started and output is visible here. 2026-03-24 03:51:58.528240 | orchestrator | 2026-03-24 03:51:17 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-24 03:51:58.528345 | orchestrator | 2026-03-24 03:51:17 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-24 03:51:58.528356 | orchestrator | 2026-03-24 03:51:17 | INFO  | Importing image Cirros 0.6.2 2026-03-24 03:51:58.528363 | orchestrator | 2026-03-24 03:51:17 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-24 03:51:58.528368 | orchestrator | 2026-03-24 03:51:19 | INFO  | Waiting for import to complete... 
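The magnum tasks earlier in this log render container healthchecks as dicts of the form `{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', …], 'timeout': '30'}`, with all durations given as plain seconds. These map directly onto Docker's `--health-*` flags; a sketch of that translation, assuming the kolla dict shape shown in the log:

```python
def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Translate a kolla-style healthcheck dict (as logged above) into
    equivalent `docker run` health flags. Duration values in the log
    carry no unit suffix, so seconds are appended here."""
    test = hc["test"]
    # ['CMD-SHELL', cmd] means: run cmd through a shell inside the container.
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
```

Applied to the magnum-conductor healthcheck above, this yields `--health-cmd=healthcheck_port magnum-conductor 5672` with a 30 s interval and 3 retries.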
2026-03-24 03:51:58.528374 | orchestrator | 2026-03-24 03:51:29 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-24 03:51:58.528380 | orchestrator | 2026-03-24 03:51:30 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-24 03:51:58.528415 | orchestrator | 2026-03-24 03:51:30 | INFO  | Setting internal_version = 0.6.2 2026-03-24 03:51:58.528423 | orchestrator | 2026-03-24 03:51:30 | INFO  | Setting image_original_user = cirros 2026-03-24 03:51:58.528466 | orchestrator | 2026-03-24 03:51:30 | INFO  | Adding tag os:cirros 2026-03-24 03:51:58.528477 | orchestrator | 2026-03-24 03:51:30 | INFO  | Setting property architecture: x86_64 2026-03-24 03:51:58.528486 | orchestrator | 2026-03-24 03:51:30 | INFO  | Setting property hw_disk_bus: scsi 2026-03-24 03:51:58.528494 | orchestrator | 2026-03-24 03:51:30 | INFO  | Setting property hw_rng_model: virtio 2026-03-24 03:51:58.528505 | orchestrator | 2026-03-24 03:51:31 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-24 03:51:58.528515 | orchestrator | 2026-03-24 03:51:31 | INFO  | Setting property hw_watchdog_action: reset 2026-03-24 03:51:58.528524 | orchestrator | 2026-03-24 03:51:31 | INFO  | Setting property hypervisor_type: qemu 2026-03-24 03:51:58.528533 | orchestrator | 2026-03-24 03:51:31 | INFO  | Setting property os_distro: cirros 2026-03-24 03:51:58.528541 | orchestrator | 2026-03-24 03:51:32 | INFO  | Setting property os_purpose: minimal 2026-03-24 03:51:58.528547 | orchestrator | 2026-03-24 03:51:32 | INFO  | Setting property replace_frequency: never 2026-03-24 03:51:58.528552 | orchestrator | 2026-03-24 03:51:32 | INFO  | Setting property uuid_validity: none 2026-03-24 03:51:58.528558 | orchestrator | 2026-03-24 03:51:32 | INFO  | Setting property provided_until: none 2026-03-24 03:51:58.528563 | orchestrator | 2026-03-24 03:51:32 | INFO  | Setting property image_description: Cirros 2026-03-24 03:51:58.528568 | orchestrator | 2026-03-24 03:51:33 | INFO  | 
Setting property image_name: Cirros 2026-03-24 03:51:58.528573 | orchestrator | 2026-03-24 03:51:33 | INFO  | Setting property internal_version: 0.6.2 2026-03-24 03:51:58.528578 | orchestrator | 2026-03-24 03:51:33 | INFO  | Setting property image_original_user: cirros 2026-03-24 03:51:58.528583 | orchestrator | 2026-03-24 03:51:33 | INFO  | Setting property os_version: 0.6.2 2026-03-24 03:51:58.528605 | orchestrator | 2026-03-24 03:51:34 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-24 03:51:58.528619 | orchestrator | 2026-03-24 03:51:34 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-24 03:51:58.528624 | orchestrator | 2026-03-24 03:51:34 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-24 03:51:58.528629 | orchestrator | 2026-03-24 03:51:34 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-24 03:51:58.528634 | orchestrator | 2026-03-24 03:51:34 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-24 03:51:58.528640 | orchestrator | 2026-03-24 03:51:35 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-24 03:51:58.528645 | orchestrator | 2026-03-24 03:51:35 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-24 03:51:58.528653 | orchestrator | 2026-03-24 03:51:35 | INFO  | Importing image Cirros 0.6.3 2026-03-24 03:51:58.528659 | orchestrator | 2026-03-24 03:51:35 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-24 03:51:58.528664 | orchestrator | 2026-03-24 03:51:36 | INFO  | Waiting for image to leave queued state... 2026-03-24 03:51:58.528669 | orchestrator | 2026-03-24 03:51:41 | INFO  | Waiting for import to complete... 
2026-03-24 03:51:58.528674 | orchestrator | 2026-03-24 03:51:52 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-24 03:51:58.528692 | orchestrator | 2026-03-24 03:51:52 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-24 03:51:58.528697 | orchestrator | 2026-03-24 03:51:52 | INFO  | Setting internal_version = 0.6.3 2026-03-24 03:51:58.528702 | orchestrator | 2026-03-24 03:51:52 | INFO  | Setting image_original_user = cirros 2026-03-24 03:51:58.528707 | orchestrator | 2026-03-24 03:51:52 | INFO  | Adding tag os:cirros 2026-03-24 03:51:58.528713 | orchestrator | 2026-03-24 03:51:52 | INFO  | Setting property architecture: x86_64 2026-03-24 03:51:58.528718 | orchestrator | 2026-03-24 03:51:53 | INFO  | Setting property hw_disk_bus: scsi 2026-03-24 03:51:58.528723 | orchestrator | 2026-03-24 03:51:53 | INFO  | Setting property hw_rng_model: virtio 2026-03-24 03:51:58.528728 | orchestrator | 2026-03-24 03:51:53 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-24 03:51:58.528733 | orchestrator | 2026-03-24 03:51:53 | INFO  | Setting property hw_watchdog_action: reset 2026-03-24 03:51:58.528738 | orchestrator | 2026-03-24 03:51:54 | INFO  | Setting property hypervisor_type: qemu 2026-03-24 03:51:58.528743 | orchestrator | 2026-03-24 03:51:54 | INFO  | Setting property os_distro: cirros 2026-03-24 03:51:58.528748 | orchestrator | 2026-03-24 03:51:54 | INFO  | Setting property os_purpose: minimal 2026-03-24 03:51:58.528753 | orchestrator | 2026-03-24 03:51:54 | INFO  | Setting property replace_frequency: never 2026-03-24 03:51:58.528758 | orchestrator | 2026-03-24 03:51:55 | INFO  | Setting property uuid_validity: none 2026-03-24 03:51:58.528763 | orchestrator | 2026-03-24 03:51:55 | INFO  | Setting property provided_until: none 2026-03-24 03:51:58.528768 | orchestrator | 2026-03-24 03:51:55 | INFO  | Setting property image_description: Cirros 2026-03-24 03:51:58.528773 | orchestrator | 2026-03-24 03:51:55 | INFO  | 
Setting property image_name: Cirros 2026-03-24 03:51:58.528778 | orchestrator | 2026-03-24 03:51:56 | INFO  | Setting property internal_version: 0.6.3 2026-03-24 03:51:58.528785 | orchestrator | 2026-03-24 03:51:56 | INFO  | Setting property image_original_user: cirros 2026-03-24 03:51:58.528795 | orchestrator | 2026-03-24 03:51:56 | INFO  | Setting property os_version: 0.6.3 2026-03-24 03:51:58.528801 | orchestrator | 2026-03-24 03:51:56 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-24 03:51:58.528807 | orchestrator | 2026-03-24 03:51:57 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-24 03:51:58.528813 | orchestrator | 2026-03-24 03:51:57 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-24 03:51:58.528819 | orchestrator | 2026-03-24 03:51:57 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-24 03:51:58.528825 | orchestrator | 2026-03-24 03:51:57 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-24 03:51:58.802386 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-24 03:52:01.082988 | orchestrator | 2026-03-24 03:52:01 | INFO  | date: 2026-03-24 2026-03-24 03:52:01.083094 | orchestrator | 2026-03-24 03:52:01 | INFO  | image: octavia-amphora-haproxy-2024.2.20260324.qcow2 2026-03-24 03:52:01.083121 | orchestrator | 2026-03-24 03:52:01 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260324.qcow2 2026-03-24 03:52:01.083128 | orchestrator | 2026-03-24 03:52:01 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260324.qcow2.CHECKSUM 2026-03-24 03:52:01.249063 | orchestrator | 2026-03-24 03:52:01 | INFO  | checksum: 3494fb6e77cd8b9c39c1d9a5ff370e41debcdbef3616281e66489f30bad10abd 2026-03-24 03:52:01.327963 | orchestrator | 
2026-03-24 03:52:01 | INFO  | It takes a moment until task a5897efa-ed68-4bcc-8976-3371754b0f4a (image-manager) has been started and output is visible here. 2026-03-24 03:53:03.605479 | orchestrator | 2026-03-24 03:52:03 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-24' 2026-03-24 03:53:03.605588 | orchestrator | 2026-03-24 03:52:03 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260324.qcow2: 200 2026-03-24 03:53:03.605602 | orchestrator | 2026-03-24 03:52:03 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-24 2026-03-24 03:53:03.605611 | orchestrator | 2026-03-24 03:52:03 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260324.qcow2 2026-03-24 03:53:03.605620 | orchestrator | 2026-03-24 03:52:05 | INFO  | Waiting for image to leave queued state... 2026-03-24 03:53:03.605628 | orchestrator | 2026-03-24 03:52:07 | INFO  | Waiting for import to complete... 2026-03-24 03:53:03.605636 | orchestrator | 2026-03-24 03:52:17 | INFO  | Waiting for import to complete... 2026-03-24 03:53:03.605643 | orchestrator | 2026-03-24 03:52:27 | INFO  | Waiting for import to complete... 2026-03-24 03:53:03.605651 | orchestrator | 2026-03-24 03:52:37 | INFO  | Waiting for import to complete... 2026-03-24 03:53:03.605660 | orchestrator | 2026-03-24 03:52:47 | INFO  | Waiting for import to complete... 
2026-03-24 03:53:03.605668 | orchestrator | 2026-03-24 03:52:57 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-24' successfully completed, reloading images 2026-03-24 03:53:03.605677 | orchestrator | 2026-03-24 03:52:58 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-24' 2026-03-24 03:53:03.605686 | orchestrator | 2026-03-24 03:52:58 | INFO  | Setting internal_version = 2026-03-24 2026-03-24 03:53:03.605694 | orchestrator | 2026-03-24 03:52:58 | INFO  | Setting image_original_user = ubuntu 2026-03-24 03:53:03.605725 | orchestrator | 2026-03-24 03:52:58 | INFO  | Adding tag amphora 2026-03-24 03:53:03.605735 | orchestrator | 2026-03-24 03:52:58 | INFO  | Adding tag os:ubuntu 2026-03-24 03:53:03.605742 | orchestrator | 2026-03-24 03:52:58 | INFO  | Setting property architecture: x86_64 2026-03-24 03:53:03.605749 | orchestrator | 2026-03-24 03:52:59 | INFO  | Setting property hw_disk_bus: scsi 2026-03-24 03:53:03.605757 | orchestrator | 2026-03-24 03:52:59 | INFO  | Setting property hw_rng_model: virtio 2026-03-24 03:53:03.605765 | orchestrator | 2026-03-24 03:52:59 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-24 03:53:03.605773 | orchestrator | 2026-03-24 03:52:59 | INFO  | Setting property hw_watchdog_action: reset 2026-03-24 03:53:03.605780 | orchestrator | 2026-03-24 03:53:00 | INFO  | Setting property hypervisor_type: qemu 2026-03-24 03:53:03.605788 | orchestrator | 2026-03-24 03:53:00 | INFO  | Setting property os_distro: ubuntu 2026-03-24 03:53:03.605796 | orchestrator | 2026-03-24 03:53:00 | INFO  | Setting property replace_frequency: quarterly 2026-03-24 03:53:03.605803 | orchestrator | 2026-03-24 03:53:00 | INFO  | Setting property uuid_validity: last-1 2026-03-24 03:53:03.605811 | orchestrator | 2026-03-24 03:53:01 | INFO  | Setting property provided_until: none 2026-03-24 03:53:03.605819 | orchestrator | 2026-03-24 03:53:01 | INFO  | Setting property os_purpose: network 2026-03-24 03:53:03.605827 | orchestrator 
| 2026-03-24 03:53:01 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-24 03:53:03.605849 | orchestrator | 2026-03-24 03:53:01 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-24 03:53:03.605857 | orchestrator | 2026-03-24 03:53:01 | INFO  | Setting property internal_version: 2026-03-24 2026-03-24 03:53:03.605865 | orchestrator | 2026-03-24 03:53:02 | INFO  | Setting property image_original_user: ubuntu 2026-03-24 03:53:03.605873 | orchestrator | 2026-03-24 03:53:02 | INFO  | Setting property os_version: 2026-03-24 2026-03-24 03:53:03.605881 | orchestrator | 2026-03-24 03:53:02 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260324.qcow2 2026-03-24 03:53:03.605889 | orchestrator | 2026-03-24 03:53:02 | INFO  | Setting property image_build_date: 2026-03-24 2026-03-24 03:53:03.605897 | orchestrator | 2026-03-24 03:53:03 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-24' 2026-03-24 03:53:03.605905 | orchestrator | 2026-03-24 03:53:03 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-24' 2026-03-24 03:53:03.605913 | orchestrator | 2026-03-24 03:53:03 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-24 03:53:03.605936 | orchestrator | 2026-03-24 03:53:03 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-24 03:53:03.605946 | orchestrator | 2026-03-24 03:53:03 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-24 03:53:03.605954 | orchestrator | 2026-03-24 03:53:03 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-03-24 03:53:04.194521 | orchestrator | ok: Runtime: 0:02:54.509371 2026-03-24 03:53:04.214552 | 2026-03-24 03:53:04.214719 | TASK [Run checks] 2026-03-24 03:53:04.967312 | orchestrator | + set -e 2026-03-24 03:53:04.967632 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-03-24 03:53:04.967666 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 03:53:04.967696 | orchestrator | ++ INTERACTIVE=false 2026-03-24 03:53:04.967713 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 03:53:04.967728 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 03:53:04.967759 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-24 03:53:04.968831 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-24 03:53:04.975033 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 03:53:04.975127 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 03:53:04.975143 | orchestrator | + echo 2026-03-24 03:53:04.975154 | orchestrator | 2026-03-24 03:53:04.975165 | orchestrator | # CHECK 2026-03-24 03:53:04.975175 | orchestrator | 2026-03-24 03:53:04.975195 | orchestrator | + echo '# CHECK' 2026-03-24 03:53:04.975206 | orchestrator | + echo 2026-03-24 03:53:04.975232 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-24 03:53:04.976905 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-24 03:53:05.057627 | orchestrator | 2026-03-24 03:53:05.057773 | orchestrator | ## Containers @ testbed-manager 2026-03-24 03:53:05.057803 | orchestrator | 2026-03-24 03:53:05.057827 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-24 03:53:05.057847 | orchestrator | + echo 2026-03-24 03:53:05.057866 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-24 03:53:05.057886 | orchestrator | + echo 2026-03-24 03:53:05.057904 | orchestrator | + osism container testbed-manager ps 2026-03-24 03:53:06.895214 | orchestrator | 2026-03-24 03:53:06 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-03-24 03:53:07.285251 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-24 03:53:07.285359 | orchestrator | e7c9d762a371 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_blackbox_exporter 2026-03-24 03:53:07.285375 | orchestrator | 3f06c1fb1c09 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_alertmanager 2026-03-24 03:53:07.285382 | orchestrator | 5d950aad8beb registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-24 03:53:07.285389 | orchestrator | cb00cb598bf9 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_node_exporter 2026-03-24 03:53:07.285444 | orchestrator | 2f47718ab27a registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_server 2026-03-24 03:53:07.285454 | orchestrator | d19cfffd67fb registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 56 minutes ago Up 55 minutes cephclient 2026-03-24 03:53:07.285460 | orchestrator | b7f69a8d0ea0 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-24 03:53:07.285466 | orchestrator | aea7ec49bab9 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-24 03:53:07.285494 | orchestrator | 0dd2d3076898 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-24 03:53:07.285500 | orchestrator | 0667beffc72d registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-03-24 03:53:07.285506 | orchestrator | 751d1749a75a phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-03-24 03:53:07.285511 | 
orchestrator | 35273dcb58d6 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-03-24 03:53:07.285517 | orchestrator | 04f704470fa0 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-03-24 03:53:07.285523 | orchestrator | 7b4531a4d1db registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-24 03:53:07.285545 | orchestrator | e5726f2a410c registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-03-24 03:53:07.285557 | orchestrator | 49653e0ca4d4 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-03-24 03:53:07.285563 | orchestrator | a0f89a132bdb registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-03-24 03:53:07.285569 | orchestrator | 7ac6ee1be083 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-03-24 03:53:07.285575 | orchestrator | 483e2bee6ebf registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-03-24 03:53:07.285581 | orchestrator | 8e8561d94fc4 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-03-24 03:53:07.285587 | orchestrator | c7e8c912abd0 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-03-24 03:53:07.285593 | orchestrator | 86cdd3e09575 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-24 
03:53:07.285604 | orchestrator | 7314809ea568 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-03-24 03:53:07.285610 | orchestrator | 97eab1ec4abe registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-03-24 03:53:07.285616 | orchestrator | 9e303f80207f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-03-24 03:53:07.285622 | orchestrator | ed92bbfa7f31 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-24 03:53:07.285628 | orchestrator | d17190b92164 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-03-24 03:53:07.285634 | orchestrator | 7e262d7d048b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-03-24 03:53:07.285640 | orchestrator | 29a0506d85af registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-03-24 03:53:07.285649 | orchestrator | 3ea7d9510563 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-24 03:53:07.558937 | orchestrator | 2026-03-24 03:53:07.559034 | orchestrator | ## Images @ testbed-manager 2026-03-24 03:53:07.559049 | orchestrator | 2026-03-24 03:53:07.559057 | orchestrator | + echo 2026-03-24 03:53:07.559066 | orchestrator | + echo '## Images @ testbed-manager' 2026-03-24 03:53:07.559074 | orchestrator | + echo 2026-03-24 03:53:07.559086 | orchestrator | + osism container testbed-manager images 2026-03-24 03:53:09.841527 | orchestrator | 
REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-24 03:53:09.841641 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 d5038a5072a6 24 hours ago 239MB 2026-03-24 03:53:09.841658 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 7 weeks ago 41.4MB 2026-03-24 03:53:09.841670 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-24 03:53:09.841681 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB 2026-03-24 03:53:09.841692 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-24 03:53:09.841703 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-24 03:53:09.841714 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-24 03:53:09.841728 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB 2026-03-24 03:53:09.841739 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-24 03:53:09.841782 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB 2026-03-24 03:53:09.841794 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB 2026-03-24 03:53:09.841804 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-24 03:53:09.841815 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB 2026-03-24 03:53:09.841826 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB 2026-03-24 03:53:09.841837 | orchestrator | 
registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB 2026-03-24 03:53:09.841848 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB 2026-03-24 03:53:09.841859 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB 2026-03-24 03:53:09.841870 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB 2026-03-24 03:53:09.841881 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-03-24 03:53:09.841921 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-24 03:53:09.841933 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-03-24 03:53:09.841944 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-03-24 03:53:09.841954 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB 2026-03-24 03:53:09.841965 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-24 03:53:09.841976 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-03-24 03:53:10.112741 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-24 03:53:10.112889 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-24 03:53:10.159451 | orchestrator | 2026-03-24 03:53:10.159570 | orchestrator | ## Containers @ testbed-node-0 2026-03-24 03:53:10.159581 | orchestrator | 2026-03-24 03:53:10.159587 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-24 03:53:10.159593 | orchestrator | + echo 2026-03-24 03:53:10.159598 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-24 03:53:10.159605 | orchestrator | + echo 2026-03-24 03:53:10.159610 | orchestrator | + osism container testbed-node-0 ps 
2026-03-24 03:53:12.453081 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-24 03:53:12.453163 | orchestrator | e9a3858031ca registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-24 03:53:12.453189 | orchestrator | d6f5d8805942 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-24 03:53:12.453194 | orchestrator | cd090e034e28 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-03-24 03:53:12.453199 | orchestrator | 3df7d1a20be1 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_elasticsearch_exporter 2026-03-24 03:53:12.453223 | orchestrator | d0fa47d15cbc registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-24 03:53:12.453227 | orchestrator | ddc86eb840b8 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-03-24 03:53:12.453236 | orchestrator | f9db0c1660c1 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_mysqld_exporter 2026-03-24 03:53:12.453240 | orchestrator | 44f263140f31 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_node_exporter 2026-03-24 03:53:12.453244 | orchestrator | b162a68dc37c registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share 2026-03-24 03:53:12.453248 | orchestrator | 16c02a8b183d 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-03-24 03:53:12.453251 | orchestrator | fc1946069812 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data 2026-03-24 03:53:12.453255 | orchestrator | 5864f96aced9 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-03-24 03:53:12.453259 | orchestrator | 2b618c50355f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) aodh_notifier 2026-03-24 03:53:12.453263 | orchestrator | faaf1aa10722 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) aodh_listener 2026-03-24 03:53:12.453267 | orchestrator | c01294fdde69 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-03-24 03:53:12.453270 | orchestrator | a40c85fe91f6 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api 2026-03-24 03:53:12.453274 | orchestrator | 75ff9b2007de registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes ceilometer_central 2026-03-24 03:53:12.453278 | orchestrator | a10618d9a3d1 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) ceilometer_notification 2026-03-24 03:53:12.453282 | orchestrator | fcb5f744cf82 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker 2026-03-24 03:53:12.453299 | orchestrator | 0b67e7b5e611 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_housekeeping 2026-03-24 03:53:12.453304 | orchestrator | 01dd6009ea6a registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_health_manager 2026-03-24 03:53:12.453308 | orchestrator | 018b5f95ffb8 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes octavia_driver_agent 2026-03-24 03:53:12.453315 | orchestrator | 114bc1415064 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_api 2026-03-24 03:53:12.453319 | orchestrator | 16f93ec83a77 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-03-24 03:53:12.453323 | orchestrator | d07f70ad9d52 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-03-24 03:53:12.453330 | orchestrator | bd900e88f520 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-03-24 03:53:12.453334 | orchestrator | 0126f657425e registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_central 2026-03-24 03:53:12.453338 | orchestrator | 2e2b2f0228b1 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_api 2026-03-24 03:53:12.453341 | orchestrator | b50cdee74ed5 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_backend_bind9 
2026-03-24 03:53:12.453345 | orchestrator | 9ac97f706b29 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_worker
2026-03-24 03:53:12.453349 | orchestrator | bb77fa4db1ff registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_keystone_listener
2026-03-24 03:53:12.453353 | orchestrator | 6b2f6aea65be registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_api
2026-03-24 03:53:12.453357 | orchestrator | 037df4797613 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_backup
2026-03-24 03:53:12.453361 | orchestrator | e58330799efc registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_volume
2026-03-24 03:53:12.453364 | orchestrator | 0deebe9f7214 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler
2026-03-24 03:53:12.453368 | orchestrator | 578bf60b2af3 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_api
2026-03-24 03:53:12.453372 | orchestrator | 4e57017dd71b registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) glance_api
2026-03-24 03:53:12.453376 | orchestrator | eca16825099c registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) skyline_console
2026-03-24 03:53:12.454063 | orchestrator | b4aacd51b0bf registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) skyline_apiserver
2026-03-24 03:53:12.454072 | orchestrator | 9bc001535e3a registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) horizon
2026-03-24 03:53:12.454083 | orchestrator | 8ff19c37553d registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) nova_novncproxy
2026-03-24 03:53:12.454088 | orchestrator | bc828fccdb9f registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) nova_conductor
2026-03-24 03:53:12.454097 | orchestrator | 5a262c871438 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_api
2026-03-24 03:53:12.454101 | orchestrator | 6a60ec536bf0 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_scheduler
2026-03-24 03:53:12.454114 | orchestrator | 182962893e3d registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) neutron_server
2026-03-24 03:53:12.454119 | orchestrator | e271daf82bac registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) placement_api
2026-03-24 03:53:12.454123 | orchestrator | f0a39dce334a registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) keystone
2026-03-24 03:53:12.454128 | orchestrator | 1a38ecf7a245 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_fernet
2026-03-24 03:53:12.454137 | orchestrator | ab28829470af registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_ssh
2026-03-24 03:53:12.454141 | orchestrator | 7a809afa390b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 54 minutes ago Up 54 minutes ceph-mgr-testbed-node-0
2026-03-24 03:53:12.454146 | orchestrator | 06c334919366 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-03-24 03:53:12.454150 | orchestrator | cefde431640e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-03-24 03:53:12.454154 | orchestrator | 35a2677f73b0 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-24 03:53:12.454159 | orchestrator | e79aecd4c4e0 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-24 03:53:12.454163 | orchestrator | a194e81d17b4 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-24 03:53:12.454168 | orchestrator | 47316dc6ba98 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-24 03:53:12.454175 | orchestrator | e880da5af516 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-24 03:53:12.454180 | orchestrator | f11f03c3f0ec registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-24 03:53:12.454192 | orchestrator | ecdc5d29ebcc registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-24 03:53:12.454196 | orchestrator | 4eaa26fbea3f registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-24 03:53:12.454200 | orchestrator | fe6ac715d1f9 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-24 03:53:12.454204 | orchestrator | 6c07dfa07e97 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-24 03:53:12.454208 | orchestrator | fea0dc747113 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-24 03:53:12.454211 | orchestrator | c8696bc887c1 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-03-24 03:53:12.454215 | orchestrator | 13d687f884fa registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-03-24 03:53:12.454219 | orchestrator | 40ca51ac5f53 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-03-24 03:53:12.454223 | orchestrator | 51abbaf0ad65 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-03-24 03:53:12.454226 | orchestrator | b2e3fa7c37eb registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-03-24 03:53:12.454230 | orchestrator | 309e92441e76 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-24 03:53:12.454234 | orchestrator | 2dfe289cf9de registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-24 03:53:12.454238 | orchestrator | 3d4f63ec7dd5 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-24 03:53:12.720227 | orchestrator |
2026-03-24 03:53:12.720307 | orchestrator | ## Images @ testbed-node-0
2026-03-24 03:53:12.720319 | orchestrator |
2026-03-24 03:53:12.720325 | orchestrator | + echo
2026-03-24 03:53:12.720331 | orchestrator | + echo '## Images @ testbed-node-0'
2026-03-24 03:53:12.720338 | orchestrator | + echo
2026-03-24 03:53:12.720343 | orchestrator | + osism container testbed-node-0 images
2026-03-24 03:53:14.968292 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-24 03:53:14.968488 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-24 03:53:14.968504 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB
2026-03-24 03:53:14.968516 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB
2026-03-24 03:53:14.968529 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB
2026-03-24 03:53:14.968566 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB
2026-03-24 03:53:14.968576 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-24 03:53:14.968586 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-24 03:53:14.968596 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB
2026-03-24 03:53:14.968612 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB
2026-03-24 03:53:14.968620 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB
2026-03-24 03:53:14.968628 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-24 03:53:14.968637 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB
2026-03-24 03:53:14.968646 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB
2026-03-24 03:53:14.968654 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB
2026-03-24 03:53:14.968662 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB
2026-03-24 03:53:14.968671 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB
2026-03-24 03:53:14.968680 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-24 03:53:14.968689 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-24 03:53:14.968698 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-24 03:53:14.968706 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-24 03:53:14.968715 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-24 03:53:14.968724 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-24 03:53:14.968733 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-24 03:53:14.968742 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-24 03:53:14.968751 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-24 03:53:14.968761 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-24 03:53:14.968769 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-24 03:53:14.968793 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-24 03:53:14.968802 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-24 03:53:14.968809 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-24 03:53:14.968824 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-24 03:53:14.968858 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-24 03:53:14.968867 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-24 03:53:14.968874 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-24 03:53:14.968882 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-24 03:53:14.968889 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-24 03:53:14.968903 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-24 03:53:14.968913 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-24 03:53:14.968921 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-24 03:53:14.968929 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-24 03:53:14.968938 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-24 03:53:14.968946 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-24 03:53:14.968955 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB
2026-03-24 03:53:14.968963 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB
2026-03-24 03:53:14.968972 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB
2026-03-24 03:53:14.968980 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB
2026-03-24 03:53:14.968989 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB
2026-03-24 03:53:14.968998 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB
2026-03-24 03:53:14.969005 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB
2026-03-24 03:53:14.969014 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB
2026-03-24 03:53:14.969021 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB
2026-03-24 03:53:14.969029 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB
2026-03-24 03:53:14.969037 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB
2026-03-24 03:53:14.969044 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB
2026-03-24 03:53:14.969053 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB
2026-03-24 03:53:14.969061 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB
2026-03-24 03:53:14.969076 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB
2026-03-24 03:53:14.969083 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB
2026-03-24 03:53:14.969096 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB
2026-03-24 03:53:14.969105 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB
2026-03-24 03:53:14.969112 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB
2026-03-24 03:53:14.969120 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB
2026-03-24 03:53:14.969135 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB
2026-03-24 03:53:14.969154 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB
2026-03-24 03:53:14.969163 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB
2026-03-24 03:53:14.969172 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB
2026-03-24 03:53:14.969180 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB
2026-03-24 03:53:14.969189 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB
2026-03-24 03:53:14.969198 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB
2026-03-24 03:53:15.266064 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-03-24 03:53:15.266157 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-24 03:53:15.319279 | orchestrator |
2026-03-24 03:53:15.319362 | orchestrator | ## Containers @ testbed-node-1
2026-03-24 03:53:15.319376 | orchestrator |
2026-03-24 03:53:15.319384 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-24 03:53:15.319412 | orchestrator | + echo
2026-03-24 03:53:15.319420 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-03-24 03:53:15.319428 | orchestrator | + echo
2026-03-24 03:53:15.319435 | orchestrator | + osism container testbed-node-1 ps
2026-03-24 03:53:17.699113 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-03-24 03:53:17.699201 | orchestrator | 09c83b3fd62d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-03-24 03:53:17.699214 | orchestrator | 3401913a1c9e registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-03-24 03:53:17.699223 | orchestrator | 95307c3025a4 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-03-24 03:53:17.699231 | orchestrator | 83540e33905c registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_elasticsearch_exporter
2026-03-24 03:53:17.699244 | orchestrator | 65bff2630beb registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-03-24 03:53:17.699257 | orchestrator | ffc8aeeb3ec5 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter
2026-03-24 03:53:17.699295 | orchestrator | 903d0c156328 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_mysqld_exporter
2026-03-24 03:53:17.699309 | orchestrator | 6bbc8c960025 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_node_exporter
2026-03-24 03:53:17.699322 | orchestrator | b9451f21d56a registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share
2026-03-24 03:53:17.699335 | orchestrator | f01c262e39f9 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler
2026-03-24 03:53:17.699346 | orchestrator | e766b531a9a5 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data
2026-03-24 03:53:17.699358 | orchestrator | 3359a162dbd4 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-03-24 03:53:17.699382 | orchestrator | c087c7b643e7 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) aodh_notifier
2026-03-24 03:53:17.699566 | orchestrator | cbcdfcc5670b registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-03-24 03:53:17.699583 | orchestrator | 8cca35966eab registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator
2026-03-24 03:53:17.699595 | orchestrator | 58dfa6862834 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api
2026-03-24 03:53:17.699603 | orchestrator | a63584712d19 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes ceilometer_central
2026-03-24 03:53:17.699611 | orchestrator | 2e4f52ddc116 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) ceilometer_notification
2026-03-24 03:53:17.699619 | orchestrator | 7b05957fb448 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker
2026-03-24 03:53:17.699647 | orchestrator | e33af2b7b68d registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_housekeeping
2026-03-24 03:53:17.699656 | orchestrator | 52f2395667d9 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_health_manager
2026-03-24 03:53:17.699664 | orchestrator | 9909c1dfa343 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes octavia_driver_agent
2026-03-24 03:53:17.699672 | orchestrator | 585d3fa0467f registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_api
2026-03-24 03:53:17.699690 | orchestrator | daaa86852e53 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker
2026-03-24 03:53:17.699698 | orchestrator | 7d2aa7ddcf0a registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns
2026-03-24 03:53:17.699706 | orchestrator | 8877988073bc registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer
2026-03-24 03:53:17.699714 | orchestrator | f4e4e05d0a00 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_central
2026-03-24 03:53:17.699722 | orchestrator | 4d3a1b0d75db registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_api
2026-03-24 03:53:17.699730 | orchestrator | 877d7be1e733 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_backend_bind9
2026-03-24 03:53:17.699738 | orchestrator | 375e780e1f99 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_worker
2026-03-24 03:53:17.699746 | orchestrator | 9e6e06ba0ec4 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_keystone_listener
2026-03-24 03:53:17.699754 | orchestrator | 38a1de60563f registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_api
2026-03-24 03:53:17.699762 | orchestrator | 5809a66f7387 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_backup
2026-03-24 03:53:17.699770 | orchestrator | a9f2471107c2 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_volume
2026-03-24 03:53:17.699777 | orchestrator | 3987f3531083 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler
2026-03-24 03:53:17.699785 | orchestrator | 5d11b737dac8 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_api
2026-03-24 03:53:17.699793 | orchestrator | 68c846694340 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) glance_api
2026-03-24 03:53:17.699808 | orchestrator | 36ecb6f6b121 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) skyline_console
2026-03-24 03:53:17.699816 | orchestrator | 05c6787e9499 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) skyline_apiserver
2026-03-24 03:53:17.699830 | orchestrator | 5d20e73432eb registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) horizon
2026-03-24 03:53:17.699839 | orchestrator | 9eadeaea2a3c registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) nova_novncproxy
2026-03-24 03:53:17.699852 | orchestrator | 9ce2132832d7 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) nova_conductor
2026-03-24 03:53:17.699860 | orchestrator | cca960e8ccf0 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_api
2026-03-24 03:53:17.699868 | orchestrator | 71a29bc1679c registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_scheduler
2026-03-24 03:53:17.699876 | orchestrator | 54cddb75312e registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) neutron_server
2026-03-24 03:53:17.699991 | orchestrator | e752d01cca7d registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) placement_api
2026-03-24 03:53:17.700007 | orchestrator | 53607490017c registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 51 minutes (healthy) keystone
2026-03-24 03:53:17.700020 | orchestrator | 2ab51c0ba83d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_fernet
2026-03-24 03:53:17.700031 | orchestrator | ef08ff1c5ddf registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_ssh
2026-03-24 03:53:17.700042 | orchestrator | 9d8aa1b131fd registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 54 minutes ago Up 54 minutes ceph-mgr-testbed-node-1
2026-03-24 03:53:17.700056 | orchestrator | 4e2dbf00f15b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1
2026-03-24 03:53:17.700067 | orchestrator | 4f8b0ade79f3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1
2026-03-24 03:53:17.700080 | orchestrator | 816faf2c28b6 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-03-24 03:53:17.700093 | orchestrator | 3c31da216dcb registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-03-24 03:53:17.700106 | orchestrator | 0ec045e86e6b registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-03-24 03:53:17.700118 | orchestrator | f4af17ae1367 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-03-24 03:53:17.700130 | orchestrator | f0bb3112b24a registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-03-24 03:53:17.700141 | orchestrator | d170a60bba62 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-03-24 03:53:17.700152 | orchestrator | 42868f846058 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-03-24 03:53:17.700174 | orchestrator | 1cb358dac479 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-03-24 03:53:17.700185 | orchestrator | 9465686b9b21 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-03-24 03:53:17.700197 | orchestrator | 711305ca2e0e registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-03-24 03:53:17.700210 | orchestrator | 85b6c62c60fb registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-03-24 03:53:17.700221 | orchestrator | 01cebec931a1 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-03-24 03:53:17.700232 | orchestrator | 673b0892bdaa registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-03-24 03:53:17.700260 | orchestrator | 387f7e822874 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-03-24 03:53:17.700281 | orchestrator | 8393933e68f7 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-03-24 03:53:17.700294 | orchestrator | 727220f931c8 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-03-24 03:53:17.700306 | orchestrator | 00d68c2908fa registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-03-24 03:53:17.700322 | orchestrator | d992a8924a5d registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-03-24 03:53:17.700336 | orchestrator | 0970e08cf87f registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-03-24 03:53:18.063532 | orchestrator |
2026-03-24 03:53:18.063616 | orchestrator | ## Images @ testbed-node-1
2026-03-24 03:53:18.063627 | orchestrator |
2026-03-24 03:53:18.063633 | orchestrator | + echo
2026-03-24 03:53:18.063640 | orchestrator | + echo '## Images @ testbed-node-1'
2026-03-24 03:53:18.063647 | orchestrator | + echo
2026-03-24 03:53:18.063654 | orchestrator | + osism container testbed-node-1 images
2026-03-24 03:53:20.418536 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-03-24 03:53:20.418630 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB
2026-03-24 03:53:20.418639 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB
2026-03-24 03:53:20.418647 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB
2026-03-24 03:53:20.418654 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB
2026-03-24 03:53:20.418660 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB
2026-03-24 03:53:20.418687 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB
2026-03-24 03:53:20.418694 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB
2026-03-24 03:53:20.418700 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB
2026-03-24 03:53:20.418706 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB
2026-03-24 03:53:20.418712 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB
2026-03-24 03:53:20.418718 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB
2026-03-24 03:53:20.418724 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB
2026-03-24 03:53:20.418730 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB
2026-03-24 03:53:20.418736 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB
2026-03-24 03:53:20.418742 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB
2026-03-24 03:53:20.418747 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB
2026-03-24 03:53:20.418753 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB
2026-03-24 03:53:20.418759 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB
2026-03-24 03:53:20.418764 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB
2026-03-24 03:53:20.418770 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB
2026-03-24 03:53:20.418775 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB
2026-03-24 03:53:20.418781 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB
2026-03-24 03:53:20.418787 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB
2026-03-24 03:53:20.418793 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB
2026-03-24 03:53:20.418800 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB
2026-03-24 03:53:20.418805 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB
2026-03-24 03:53:20.418811 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB
2026-03-24 03:53:20.418817 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB
2026-03-24 03:53:20.418823 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB
2026-03-24 03:53:20.418829 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB
2026-03-24 03:53:20.418835 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB
2026-03-24 03:53:20.418858 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB
2026-03-24 03:53:20.418872 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB
2026-03-24 03:53:20.418878 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB
2026-03-24 03:53:20.418884 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB
2026-03-24 03:53:20.418890 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB
2026-03-24 03:53:20.418896 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB
2026-03-24 03:53:20.418902 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB
2026-03-24 03:53:20.418908 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB
2026-03-24 03:53:20.418928 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB
2026-03-24 03:53:20.418934 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB
2026-03-24 03:53:20.418940 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB
2026-03-24 03:53:20.418945 | orchestrator |
registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-24 03:53:20.418951 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-24 03:53:20.418957 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-24 03:53:20.418962 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-24 03:53:20.418968 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-24 03:53:20.418973 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-24 03:53:20.418979 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-24 03:53:20.418986 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-24 03:53:20.418991 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-24 03:53:20.418997 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-24 03:53:20.419004 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-24 03:53:20.419010 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-24 03:53:20.419016 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-24 03:53:20.419022 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-24 03:53:20.419028 | orchestrator | 
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-24 03:53:20.419033 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-24 03:53:20.419044 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-24 03:53:20.419054 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-24 03:53:20.419060 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-24 03:53:20.419066 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-24 03:53:20.419073 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-24 03:53:20.419086 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-24 03:53:20.419094 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-24 03:53:20.419100 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-24 03:53:20.419107 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-24 03:53:20.419113 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-24 03:53:20.419120 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-24 03:53:20.835915 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-24 03:53:20.836119 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-24 03:53:20.894997 | 
orchestrator | + [[ 1 -eq -1 ]] 2026-03-24 03:53:20.895190 | orchestrator | 2026-03-24 03:53:20.895203 | orchestrator | ## Containers @ testbed-node-2 2026-03-24 03:53:20.895210 | orchestrator | 2026-03-24 03:53:20.895216 | orchestrator | + echo 2026-03-24 03:53:20.895222 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-24 03:53:20.895228 | orchestrator | + echo 2026-03-24 03:53:20.895233 | orchestrator | + osism container testbed-node-2 ps 2026-03-24 03:53:23.267642 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-24 03:53:23.267727 | orchestrator | 1702c5b0a17b registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-24 03:53:23.267738 | orchestrator | cf721f967980 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-24 03:53:23.267745 | orchestrator | 95cf43744363 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-24 03:53:23.267753 | orchestrator | 5efc018f2b83 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-24 03:53:23.267761 | orchestrator | 520415d8a84e registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-24 03:53:23.267768 | orchestrator | bc0133283f13 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-03-24 03:53:23.267775 | orchestrator | 7ab4e168c619 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_mysqld_exporter 2026-03-24 03:53:23.267804 | 
orchestrator | 6ab0af983c91 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_node_exporter 2026-03-24 03:53:23.267811 | orchestrator | 62b68c6fda54 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share 2026-03-24 03:53:23.267818 | orchestrator | dd634e3e2724 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-03-24 03:53:23.267825 | orchestrator | 5a2716d6e24b registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-03-24 03:53:23.267832 | orchestrator | 3ba6f94ae8e9 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-03-24 03:53:23.267839 | orchestrator | 893119221dce registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) aodh_notifier 2026-03-24 03:53:23.267846 | orchestrator | b6a178719839 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-03-24 03:53:23.267860 | orchestrator | c41bbc10a71b registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-03-24 03:53:23.267866 | orchestrator | fb2c28147f80 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api 2026-03-24 03:53:23.267873 | orchestrator | ff9b53ea560c registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes ceilometer_central 2026-03-24 03:53:23.267879 | orchestrator | ccfcfaecd182 
registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-03-24 03:53:23.267885 | orchestrator | bf4e10a5f046 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker 2026-03-24 03:53:23.267906 | orchestrator | ebdad6a11b87 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_housekeeping 2026-03-24 03:53:23.267913 | orchestrator | 09a55d70dcaf registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_health_manager 2026-03-24 03:53:23.267919 | orchestrator | 9e82c524832b registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes octavia_driver_agent 2026-03-24 03:53:23.267925 | orchestrator | 8b2882b37c21 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_api 2026-03-24 03:53:23.267932 | orchestrator | 695e046ed254 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-03-24 03:53:23.267938 | orchestrator | 81bed91cdb71 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-03-24 03:53:23.267950 | orchestrator | ef571288b0f8 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-03-24 03:53:23.267957 | orchestrator | a8269773a231 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_central 
2026-03-24 03:53:23.267967 | orchestrator | 1074230f5be5 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_api 2026-03-24 03:53:23.267976 | orchestrator | 1d80c2ad1364 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_backend_bind9 2026-03-24 03:53:23.267983 | orchestrator | 9e53c82e5583 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_worker 2026-03-24 03:53:23.267990 | orchestrator | fc8e55e45c57 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_keystone_listener 2026-03-24 03:53:23.267997 | orchestrator | a36e98049774 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_api 2026-03-24 03:53:23.268007 | orchestrator | bc1a52e7ebed registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_backup 2026-03-24 03:53:23.268014 | orchestrator | 3f2f3720167f registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_volume 2026-03-24 03:53:23.268020 | orchestrator | 5640e2ec5e43 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler 2026-03-24 03:53:23.268027 | orchestrator | 1710f31e3a7a registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_api 2026-03-24 03:53:23.268034 | orchestrator | 1a782d324e33 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes 
(healthy) glance_api 2026-03-24 03:53:23.268040 | orchestrator | fd61ec4edaf1 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) skyline_console 2026-03-24 03:53:23.268047 | orchestrator | 1c641b161b09 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) skyline_apiserver 2026-03-24 03:53:23.268059 | orchestrator | 250ce4cfc464 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) horizon 2026-03-24 03:53:23.268065 | orchestrator | ed1576d787ef registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) nova_novncproxy 2026-03-24 03:53:23.268072 | orchestrator | afbf22b9fbf1 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) nova_conductor 2026-03-24 03:53:23.268083 | orchestrator | 382c7da081a9 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_api 2026-03-24 03:53:23.268088 | orchestrator | 75498a7ca8e0 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_scheduler 2026-03-24 03:53:23.268095 | orchestrator | f045045d8fa2 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 46 minutes ago Up 46 minutes (healthy) neutron_server 2026-03-24 03:53:23.268100 | orchestrator | 09c6a9ded0c7 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 50 minutes ago Up 49 minutes (healthy) placement_api 2026-03-24 03:53:23.268107 | orchestrator | 47038bc3d3eb registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone 2026-03-24 
03:53:23.268113 | orchestrator | c868d389aead registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_fernet 2026-03-24 03:53:23.268119 | orchestrator | 384244d5191a registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_ssh 2026-03-24 03:53:23.268126 | orchestrator | b71c6139c158 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 54 minutes ago Up 54 minutes ceph-mgr-testbed-node-2 2026-03-24 03:53:23.268132 | orchestrator | f2bbf2461a2e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-03-24 03:53:23.268139 | orchestrator | cce21668b5d2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-03-24 03:53:23.268145 | orchestrator | 30ca3def88c7 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-24 03:53:23.268152 | orchestrator | b741202df605 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-24 03:53:23.268162 | orchestrator | 9c68fe7808c0 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-24 03:53:23.268168 | orchestrator | 1a972720cf80 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-24 03:53:23.268175 | orchestrator | 228e066669a3 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-24 03:53:23.268181 | orchestrator | 785b14569627 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-24 03:53:23.268187 | orchestrator | 7d3ea26f1729 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-24 03:53:23.268197 | orchestrator | 90044f7c98bc registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-24 03:53:23.268213 | orchestrator | 89e2e47535f1 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-24 03:53:23.268219 | orchestrator | 41baf64a4871 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-24 03:53:23.268226 | orchestrator | bf4328c5ebe5 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-24 03:53:23.268232 | orchestrator | d055f809499e registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-03-24 03:53:23.268238 | orchestrator | 64a521d102e2 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-03-24 03:53:23.268245 | orchestrator | 1a815f1a6bed registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-03-24 03:53:23.268251 | orchestrator | 49618642872b registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-03-24 03:53:23.268257 | orchestrator | be55897c45af 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-03-24 03:53:23.268264 | orchestrator | 6b8ad95ed310 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-24 03:53:23.268270 | orchestrator | b19c0eb69e2d registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-24 03:53:23.268277 | orchestrator | 28e70192d6da registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-24 03:53:23.527262 | orchestrator | 2026-03-24 03:53:23.527327 | orchestrator | ## Images @ testbed-node-2 2026-03-24 03:53:23.527332 | orchestrator | 2026-03-24 03:53:23.527337 | orchestrator | + echo 2026-03-24 03:53:23.527341 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-24 03:53:23.527346 | orchestrator | + echo 2026-03-24 03:53:23.527350 | orchestrator | + osism container testbed-node-2 images 2026-03-24 03:53:25.910562 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-24 03:53:25.910707 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-24 03:53:25.910745 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-24 03:53:25.910755 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-24 03:53:25.910761 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-24 03:53:25.910767 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-24 03:53:25.910773 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-24 
03:53:25.910778 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-24 03:53:25.910806 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-24 03:53:25.910812 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-24 03:53:25.910817 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-24 03:53:25.910828 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-24 03:53:25.910833 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-24 03:53:25.910839 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-24 03:53:25.910845 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-24 03:53:25.910850 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-24 03:53:25.910856 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-24 03:53:25.910861 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-24 03:53:25.910867 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-24 03:53:25.910872 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-24 03:53:25.910878 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-24 03:53:25.910883 | 
orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-24 03:53:25.910889 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-24 03:53:25.910894 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-24 03:53:25.910899 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-24 03:53:25.910905 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-24 03:53:25.910911 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-24 03:53:25.910916 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-24 03:53:25.910921 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-24 03:53:25.910927 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-24 03:53:25.910932 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-24 03:53:25.910937 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-24 03:53:25.910959 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-24 03:53:25.910965 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-24 03:53:25.910970 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-24 03:53:25.910981 | 
orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-24 03:53:25.910986 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-24 03:53:25.910992 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-24 03:53:25.910997 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-24 03:53:25.911003 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-24 03:53:25.911008 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-24 03:53:25.911014 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-24 03:53:25.911019 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-24 03:53:25.911024 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-24 03:53:25.911037 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-24 03:53:25.911042 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-24 03:53:25.911048 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-24 03:53:25.911053 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-24 03:53:25.911058 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-24 03:53:25.911064 | 
orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-24 03:53:25.911069 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-24 03:53:25.911075 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-24 03:53:25.911080 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-24 03:53:25.911085 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-24 03:53:25.911091 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-24 03:53:25.911096 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-24 03:53:25.911101 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-24 03:53:25.911106 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-24 03:53:25.911112 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-24 03:53:25.911117 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-24 03:53:25.911122 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-24 03:53:25.911269 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-24 03:53:25.911279 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-24 03:53:25.911285 
| orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-24 03:53:25.911290 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-24 03:53:25.911296 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-24 03:53:25.911301 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-24 03:53:25.911313 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-24 03:53:25.911323 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-24 03:53:25.911332 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-24 03:53:26.266794 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-24 03:53:26.273435 | orchestrator | + set -e 2026-03-24 03:53:26.273514 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 03:53:26.273524 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 03:53:26.273532 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 03:53:26.273539 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-24 03:53:26.273547 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 03:53:26.273554 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 03:53:26.273563 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 03:53:26.273570 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 03:53:26.273578 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 03:53:26.273585 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 03:53:26.273592 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 03:53:26.273599 | orchestrator | ++ export ARA=false 2026-03-24 03:53:26.273607 | orchestrator | ++ 
ARA=false 2026-03-24 03:53:26.273614 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 03:53:26.273621 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 03:53:26.273629 | orchestrator | ++ export TEMPEST=false 2026-03-24 03:53:26.273636 | orchestrator | ++ TEMPEST=false 2026-03-24 03:53:26.273643 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 03:53:26.273650 | orchestrator | ++ IS_ZUUL=true 2026-03-24 03:53:26.273658 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 03:53:26.273665 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 03:53:26.273672 | orchestrator | ++ export EXTERNAL_API=false 2026-03-24 03:53:26.273679 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 03:53:26.273686 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 03:53:26.273693 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 03:53:26.273702 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 03:53:26.273709 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 03:53:26.273716 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 03:53:26.273723 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 03:53:26.273731 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-24 03:53:26.273739 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-03-24 03:53:26.282474 | orchestrator | + set -e 2026-03-24 03:53:26.282573 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 03:53:26.282589 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 03:53:26.282603 | orchestrator | ++ INTERACTIVE=false 2026-03-24 03:53:26.282616 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 03:53:26.282630 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 03:53:26.282642 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-24 03:53:26.282813 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-03-24 03:53:26.286842 | orchestrator | 2026-03-24 03:53:26.286925 | orchestrator | # Ceph status 2026-03-24 03:53:26.286940 | orchestrator | 2026-03-24 03:53:26.286980 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 03:53:26.286994 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 03:53:26.287006 | orchestrator | + echo 2026-03-24 03:53:26.287017 | orchestrator | + echo '# Ceph status' 2026-03-24 03:53:26.287028 | orchestrator | + echo 2026-03-24 03:53:26.287039 | orchestrator | + ceph -s 2026-03-24 03:53:26.827218 | orchestrator | cluster: 2026-03-24 03:53:26.827292 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-03-24 03:53:26.827300 | orchestrator | health: HEALTH_OK 2026-03-24 03:53:26.827305 | orchestrator | 2026-03-24 03:53:26.827310 | orchestrator | services: 2026-03-24 03:53:26.827315 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 66m) 2026-03-24 03:53:26.827321 | orchestrator | mgr: testbed-node-0(active, since 54m), standbys: testbed-node-1, testbed-node-2 2026-03-24 03:53:26.827327 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-03-24 03:53:26.827332 | orchestrator | osd: 6 osds: 6 up (since 62m), 6 in (since 63m) 2026-03-24 03:53:26.827337 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-03-24 03:53:26.827341 | orchestrator | 2026-03-24 03:53:26.827346 | orchestrator | data: 2026-03-24 03:53:26.827351 | orchestrator | volumes: 1/1 healthy 2026-03-24 03:53:26.827355 | orchestrator | pools: 14 pools, 417 pgs 2026-03-24 03:53:26.827360 | orchestrator | objects: 556 objects, 2.2 GiB 2026-03-24 03:53:26.827364 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-03-24 03:53:26.827368 | orchestrator | pgs: 417 active+clean 2026-03-24 03:53:26.827373 | orchestrator | 2026-03-24 03:53:26.827377 | orchestrator | io: 2026-03-24 03:53:26.827382 | orchestrator | client: 87 KiB/s rd, 0 B/s wr, 87 
op/s rd, 58 op/s wr 2026-03-24 03:53:26.827416 | orchestrator | 2026-03-24 03:53:26.871735 | orchestrator | 2026-03-24 03:53:26.871834 | orchestrator | # Ceph versions 2026-03-24 03:53:26.871852 | orchestrator | 2026-03-24 03:53:26.871865 | orchestrator | + echo 2026-03-24 03:53:26.871878 | orchestrator | + echo '# Ceph versions' 2026-03-24 03:53:26.871891 | orchestrator | + echo 2026-03-24 03:53:26.871905 | orchestrator | + ceph versions 2026-03-24 03:53:27.465400 | orchestrator | { 2026-03-24 03:53:27.465482 | orchestrator | "mon": { 2026-03-24 03:53:27.465490 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-24 03:53:27.465495 | orchestrator | }, 2026-03-24 03:53:27.465500 | orchestrator | "mgr": { 2026-03-24 03:53:27.465505 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-24 03:53:27.465511 | orchestrator | }, 2026-03-24 03:53:27.465517 | orchestrator | "osd": { 2026-03-24 03:53:27.465523 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-03-24 03:53:27.465529 | orchestrator | }, 2026-03-24 03:53:27.465535 | orchestrator | "mds": { 2026-03-24 03:53:27.465541 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-24 03:53:27.465548 | orchestrator | }, 2026-03-24 03:53:27.465553 | orchestrator | "rgw": { 2026-03-24 03:53:27.465559 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-03-24 03:53:27.465565 | orchestrator | }, 2026-03-24 03:53:27.465571 | orchestrator | "overall": { 2026-03-24 03:53:27.465577 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-03-24 03:53:27.465584 | orchestrator | } 2026-03-24 03:53:27.465589 | orchestrator | } 2026-03-24 03:53:27.513541 | orchestrator | 2026-03-24 03:53:27.513604 | orchestrator | # Ceph OSD tree 
2026-03-24 03:53:27.513610 | orchestrator | 2026-03-24 03:53:27.513614 | orchestrator | + echo 2026-03-24 03:53:27.513619 | orchestrator | + echo '# Ceph OSD tree' 2026-03-24 03:53:27.513624 | orchestrator | + echo 2026-03-24 03:53:27.513627 | orchestrator | + ceph osd df tree 2026-03-24 03:53:28.014351 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-03-24 03:53:28.014491 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 394 MiB 113 GiB 5.89 1.00 - root default 2026-03-24 03:53:28.014503 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3 2026-03-24 03:53:28.014511 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 62 MiB 19 GiB 7.20 1.22 210 up osd.0 2026-03-24 03:53:28.014520 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 928 MiB 867 MiB 1 KiB 62 MiB 19 GiB 4.54 0.77 196 up osd.5 2026-03-24 03:53:28.014551 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.01 - host testbed-node-4 2026-03-24 03:53:28.014560 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 66 MiB 19 GiB 6.82 1.16 199 up osd.2 2026-03-24 03:53:28.014567 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 947 MiB 1 KiB 78 MiB 19 GiB 5.01 0.85 209 up osd.3 2026-03-24 03:53:28.014575 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-5 2026-03-24 03:53:28.014583 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 5.93 1.01 201 up osd.1 2026-03-24 03:53:28.014590 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.83 0.99 203 up osd.4 2026-03-24 03:53:28.014597 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 394 MiB 113 GiB 5.89 2026-03-24 03:53:28.014605 | orchestrator | MIN/MAX VAR: 0.77/1.22 STDDEV: 0.93 2026-03-24 03:53:28.067899 | orchestrator | 2026-03-24 03:53:28.067967 | orchestrator 
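The check script above inspects `ceph versions` and `ceph osd df tree` by eye; the same invariants (a single daemon version cluster-wide, every OSD up and in) can be asserted mechanically. A minimal sketch, operating on sample data copied from the output above rather than a live cluster — the file path and variable names are illustrative, not part of the OSISM scripts:

```shell
# Hedged sketch: sanity-check `ceph versions`-style output for a mixed-version
# cluster, and a `ceph -s` services line for down/out OSDs. Sample data is
# embedded; on a real deployment these would come from the ceph CLI.
cat > /tmp/ceph-versions-sample.json <<'EOF'
{
    "mon": { "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 },
    "osd": { "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 },
    "overall": { "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 }
}
EOF

# Count distinct version strings; more than one means a partially upgraded cluster.
distinct=$(grep -o 'ceph version [0-9][0-9.]*' /tmp/ceph-versions-sample.json | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "all daemons on one version"
else
  echo "mixed versions detected: $distinct distinct versions" >&2
fi

# Parse the osd line from `ceph -s` and compare total vs up vs in counts.
osd_line='osd: 6 osds: 6 up (since 62m), 6 in (since 63m)'
total=$(echo "$osd_line" | awk '{print $2}')
up=$(echo "$osd_line" | grep -o '[0-9]* up' | awk '{print $1}')
in_=$(echo "$osd_line" | grep -o '[0-9]* in' | awk '{print $1}')
if [ "$total" = "$up" ] && [ "$total" = "$in_" ]; then
  echo "OSDs healthy: $up/$total up, $in_/$total in"
else
  echo "OSD problem: $up/$total up, $in_/$total in" >&2
fi
```

In the run above both checks would pass: `ceph versions` reports a single 18.2.7 reef build for all 18 daemons, and the status line shows 6/6 OSDs up and in.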
| # Ceph monitor status 2026-03-24 03:53:28.067975 | orchestrator | 2026-03-24 03:53:28.067980 | orchestrator | + echo 2026-03-24 03:53:28.067986 | orchestrator | + echo '# Ceph monitor status' 2026-03-24 03:53:28.067991 | orchestrator | + echo 2026-03-24 03:53:28.067996 | orchestrator | + ceph mon stat 2026-03-24 03:53:28.623874 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-03-24 03:53:28.668322 | orchestrator | 2026-03-24 03:53:28.668430 | orchestrator | # Ceph quorum status 2026-03-24 03:53:28.668439 | orchestrator | 2026-03-24 03:53:28.668445 | orchestrator | + echo 2026-03-24 03:53:28.668450 | orchestrator | + echo '# Ceph quorum status' 2026-03-24 03:53:28.668455 | orchestrator | + echo 2026-03-24 03:53:28.668687 | orchestrator | + ceph quorum_status 2026-03-24 03:53:28.669118 | orchestrator | + jq 2026-03-24 03:53:29.285548 | orchestrator | { 2026-03-24 03:53:29.285661 | orchestrator | "election_epoch": 8, 2026-03-24 03:53:29.285676 | orchestrator | "quorum": [ 2026-03-24 03:53:29.285685 | orchestrator | 0, 2026-03-24 03:53:29.285694 | orchestrator | 1, 2026-03-24 03:53:29.285703 | orchestrator | 2 2026-03-24 03:53:29.285712 | orchestrator | ], 2026-03-24 03:53:29.285720 | orchestrator | "quorum_names": [ 2026-03-24 03:53:29.285729 | orchestrator | "testbed-node-0", 2026-03-24 03:53:29.285738 | orchestrator | "testbed-node-1", 2026-03-24 03:53:29.285746 | orchestrator | "testbed-node-2" 2026-03-24 03:53:29.285755 | orchestrator | ], 2026-03-24 03:53:29.285764 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-03-24 03:53:29.285773 | orchestrator | "quorum_age": 4005, 2026-03-24 03:53:29.285782 | orchestrator | "features": { 
2026-03-24 03:53:29.285790 | orchestrator | "quorum_con": "4540138322906710015", 2026-03-24 03:53:29.285799 | orchestrator | "quorum_mon": [ 2026-03-24 03:53:29.285807 | orchestrator | "kraken", 2026-03-24 03:53:29.285816 | orchestrator | "luminous", 2026-03-24 03:53:29.285837 | orchestrator | "mimic", 2026-03-24 03:53:29.285846 | orchestrator | "osdmap-prune", 2026-03-24 03:53:29.285854 | orchestrator | "nautilus", 2026-03-24 03:53:29.285863 | orchestrator | "octopus", 2026-03-24 03:53:29.285871 | orchestrator | "pacific", 2026-03-24 03:53:29.285894 | orchestrator | "elector-pinging", 2026-03-24 03:53:29.286108 | orchestrator | "quincy", 2026-03-24 03:53:29.286131 | orchestrator | "reef" 2026-03-24 03:53:29.286156 | orchestrator | ] 2026-03-24 03:53:29.286171 | orchestrator | }, 2026-03-24 03:53:29.286186 | orchestrator | "monmap": { 2026-03-24 03:53:29.286201 | orchestrator | "epoch": 1, 2026-03-24 03:53:29.286215 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-03-24 03:53:29.286232 | orchestrator | "modified": "2026-03-24T02:46:27.177219Z", 2026-03-24 03:53:29.286246 | orchestrator | "created": "2026-03-24T02:46:27.177219Z", 2026-03-24 03:53:29.286261 | orchestrator | "min_mon_release": 18, 2026-03-24 03:53:29.286276 | orchestrator | "min_mon_release_name": "reef", 2026-03-24 03:53:29.286291 | orchestrator | "election_strategy": 1, 2026-03-24 03:53:29.286304 | orchestrator | "disallowed_leaders: ": "", 2026-03-24 03:53:29.286344 | orchestrator | "stretch_mode": false, 2026-03-24 03:53:29.286361 | orchestrator | "tiebreaker_mon": "", 2026-03-24 03:53:29.286375 | orchestrator | "removed_ranks: ": "", 2026-03-24 03:53:29.286418 | orchestrator | "features": { 2026-03-24 03:53:29.286432 | orchestrator | "persistent": [ 2026-03-24 03:53:29.286445 | orchestrator | "kraken", 2026-03-24 03:53:29.286474 | orchestrator | "luminous", 2026-03-24 03:53:29.286509 | orchestrator | "mimic", 2026-03-24 03:53:29.286534 | orchestrator | "osdmap-prune", 
2026-03-24 03:53:29.286550 | orchestrator | "nautilus", 2026-03-24 03:53:29.286564 | orchestrator | "octopus", 2026-03-24 03:53:29.286578 | orchestrator | "pacific", 2026-03-24 03:53:29.286593 | orchestrator | "elector-pinging", 2026-03-24 03:53:29.286607 | orchestrator | "quincy", 2026-03-24 03:53:29.286622 | orchestrator | "reef" 2026-03-24 03:53:29.286636 | orchestrator | ], 2026-03-24 03:53:29.286652 | orchestrator | "optional": [] 2026-03-24 03:53:29.286667 | orchestrator | }, 2026-03-24 03:53:29.286681 | orchestrator | "mons": [ 2026-03-24 03:53:29.286696 | orchestrator | { 2026-03-24 03:53:29.286712 | orchestrator | "rank": 0, 2026-03-24 03:53:29.286726 | orchestrator | "name": "testbed-node-0", 2026-03-24 03:53:29.286742 | orchestrator | "public_addrs": { 2026-03-24 03:53:29.286753 | orchestrator | "addrvec": [ 2026-03-24 03:53:29.286763 | orchestrator | { 2026-03-24 03:53:29.286773 | orchestrator | "type": "v2", 2026-03-24 03:53:29.286783 | orchestrator | "addr": "192.168.16.10:3300", 2026-03-24 03:53:29.286793 | orchestrator | "nonce": 0 2026-03-24 03:53:29.286803 | orchestrator | }, 2026-03-24 03:53:29.286813 | orchestrator | { 2026-03-24 03:53:29.286822 | orchestrator | "type": "v1", 2026-03-24 03:53:29.286832 | orchestrator | "addr": "192.168.16.10:6789", 2026-03-24 03:53:29.286843 | orchestrator | "nonce": 0 2026-03-24 03:53:29.286853 | orchestrator | } 2026-03-24 03:53:29.286864 | orchestrator | ] 2026-03-24 03:53:29.286873 | orchestrator | }, 2026-03-24 03:53:29.286883 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-03-24 03:53:29.286894 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-03-24 03:53:29.286904 | orchestrator | "priority": 0, 2026-03-24 03:53:29.286914 | orchestrator | "weight": 0, 2026-03-24 03:53:29.286923 | orchestrator | "crush_location": "{}" 2026-03-24 03:53:29.286931 | orchestrator | }, 2026-03-24 03:53:29.286940 | orchestrator | { 2026-03-24 03:53:29.286953 | orchestrator | "rank": 1, 2026-03-24 03:53:29.286967 
| orchestrator | "name": "testbed-node-1", 2026-03-24 03:53:29.286986 | orchestrator | "public_addrs": { 2026-03-24 03:53:29.287027 | orchestrator | "addrvec": [ 2026-03-24 03:53:29.287042 | orchestrator | { 2026-03-24 03:53:29.287055 | orchestrator | "type": "v2", 2026-03-24 03:53:29.287069 | orchestrator | "addr": "192.168.16.11:3300", 2026-03-24 03:53:29.287083 | orchestrator | "nonce": 0 2026-03-24 03:53:29.287097 | orchestrator | }, 2026-03-24 03:53:29.287111 | orchestrator | { 2026-03-24 03:53:29.287125 | orchestrator | "type": "v1", 2026-03-24 03:53:29.287140 | orchestrator | "addr": "192.168.16.11:6789", 2026-03-24 03:53:29.287154 | orchestrator | "nonce": 0 2026-03-24 03:53:29.287170 | orchestrator | } 2026-03-24 03:53:29.287184 | orchestrator | ] 2026-03-24 03:53:29.287197 | orchestrator | }, 2026-03-24 03:53:29.287212 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-03-24 03:53:29.287226 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-03-24 03:53:29.287240 | orchestrator | "priority": 0, 2026-03-24 03:53:29.287254 | orchestrator | "weight": 0, 2026-03-24 03:53:29.287268 | orchestrator | "crush_location": "{}" 2026-03-24 03:53:29.287283 | orchestrator | }, 2026-03-24 03:53:29.287298 | orchestrator | { 2026-03-24 03:53:29.287312 | orchestrator | "rank": 2, 2026-03-24 03:53:29.287326 | orchestrator | "name": "testbed-node-2", 2026-03-24 03:53:29.287340 | orchestrator | "public_addrs": { 2026-03-24 03:53:29.287353 | orchestrator | "addrvec": [ 2026-03-24 03:53:29.287367 | orchestrator | { 2026-03-24 03:53:29.287404 | orchestrator | "type": "v2", 2026-03-24 03:53:29.287421 | orchestrator | "addr": "192.168.16.12:3300", 2026-03-24 03:53:29.287436 | orchestrator | "nonce": 0 2026-03-24 03:53:29.287449 | orchestrator | }, 2026-03-24 03:53:29.287464 | orchestrator | { 2026-03-24 03:53:29.287478 | orchestrator | "type": "v1", 2026-03-24 03:53:29.287492 | orchestrator | "addr": "192.168.16.12:6789", 2026-03-24 03:53:29.287524 | orchestrator | 
"nonce": 0 2026-03-24 03:53:29.287539 | orchestrator | } 2026-03-24 03:53:29.287552 | orchestrator | ] 2026-03-24 03:53:29.287564 | orchestrator | }, 2026-03-24 03:53:29.287577 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-03-24 03:53:29.287589 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-03-24 03:53:29.287601 | orchestrator | "priority": 0, 2026-03-24 03:53:29.287613 | orchestrator | "weight": 0, 2026-03-24 03:53:29.287633 | orchestrator | "crush_location": "{}" 2026-03-24 03:53:29.287645 | orchestrator | } 2026-03-24 03:53:29.287657 | orchestrator | ] 2026-03-24 03:53:29.287670 | orchestrator | } 2026-03-24 03:53:29.287682 | orchestrator | } 2026-03-24 03:53:29.287715 | orchestrator | 2026-03-24 03:53:29.287731 | orchestrator | # Ceph free space status 2026-03-24 03:53:29.287745 | orchestrator | 2026-03-24 03:53:29.287759 | orchestrator | + echo 2026-03-24 03:53:29.287774 | orchestrator | + echo '# Ceph free space status' 2026-03-24 03:53:29.287788 | orchestrator | + echo 2026-03-24 03:53:29.287802 | orchestrator | + ceph df 2026-03-24 03:53:29.837942 | orchestrator | --- RAW STORAGE --- 2026-03-24 03:53:29.838134 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-03-24 03:53:29.838166 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-03-24 03:53:29.838178 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-03-24 03:53:29.838190 | orchestrator | 2026-03-24 03:53:29.838202 | orchestrator | --- POOLS --- 2026-03-24 03:53:29.838214 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-03-24 03:53:29.838226 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-03-24 03:53:29.838237 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-03-24 03:53:29.838248 | orchestrator | cephfs_metadata 3 32 4.4 KiB 22 96 KiB 0 35 GiB 2026-03-24 03:53:29.838260 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-03-24 03:53:29.838270 | orchestrator | default.rgw.buckets.index 5 
32 0 B 0 0 B 0 35 GiB 2026-03-24 03:53:29.838281 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-03-24 03:53:29.838292 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-03-24 03:53:29.838303 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-03-24 03:53:29.838313 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-03-24 03:53:29.838324 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-03-24 03:53:29.838335 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-03-24 03:53:29.838346 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2026-03-24 03:53:29.838356 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-03-24 03:53:29.838367 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-03-24 03:53:29.887193 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-24 03:53:29.948688 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-24 03:53:29.948789 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-03-24 03:53:29.948810 | orchestrator | + osism apply facts 2026-03-24 03:53:41.959780 | orchestrator | 2026-03-24 03:53:41 | INFO  | Task 13d18884-df53-464b-91d1-de2929c48190 (facts) was prepared for execution. 2026-03-24 03:53:41.959898 | orchestrator | 2026-03-24 03:53:41 | INFO  | It takes a moment until task 13d18884-df53-464b-91d1-de2929c48190 (facts) has been started and output is visible here. 
2026-03-24 03:53:56.593723 | orchestrator | 2026-03-24 03:53:56.593915 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-24 03:53:56.593946 | orchestrator | 2026-03-24 03:53:56.593966 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-24 03:53:56.593986 | orchestrator | Tuesday 24 March 2026 03:53:45 +0000 (0:00:00.201) 0:00:00.201 ********* 2026-03-24 03:53:56.594004 | orchestrator | ok: [testbed-manager] 2026-03-24 03:53:56.594110 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:53:56.594132 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:53:56.594194 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:53:56.594244 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:53:56.594257 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:53:56.594270 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:53:56.594284 | orchestrator | 2026-03-24 03:53:56.594298 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-24 03:53:56.594310 | orchestrator | Tuesday 24 March 2026 03:53:46 +0000 (0:00:00.942) 0:00:01.144 ********* 2026-03-24 03:53:56.594323 | orchestrator | skipping: [testbed-manager] 2026-03-24 03:53:56.594336 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:53:56.594349 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:53:56.594361 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:53:56.594417 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:53:56.594430 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:53:56.594442 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:53:56.594455 | orchestrator | 2026-03-24 03:53:56.594467 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-24 03:53:56.594480 | orchestrator | 2026-03-24 03:53:56.594493 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-24 03:53:56.594506 | orchestrator | Tuesday 24 March 2026 03:53:47 +0000 (0:00:01.131) 0:00:02.275 ********* 2026-03-24 03:53:56.594518 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:53:56.594531 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:53:56.594543 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:53:56.594555 | orchestrator | ok: [testbed-manager] 2026-03-24 03:53:56.594567 | orchestrator | ok: [testbed-node-3] 2026-03-24 03:53:56.594579 | orchestrator | ok: [testbed-node-4] 2026-03-24 03:53:56.594589 | orchestrator | ok: [testbed-node-5] 2026-03-24 03:53:56.594601 | orchestrator | 2026-03-24 03:53:56.594612 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-24 03:53:56.594622 | orchestrator | 2026-03-24 03:53:56.594633 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-24 03:53:56.594644 | orchestrator | Tuesday 24 March 2026 03:53:55 +0000 (0:00:07.546) 0:00:09.821 ********* 2026-03-24 03:53:56.594655 | orchestrator | skipping: [testbed-manager] 2026-03-24 03:53:56.594666 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:53:56.594676 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:53:56.594687 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:53:56.594697 | orchestrator | skipping: [testbed-node-3] 2026-03-24 03:53:56.594708 | orchestrator | skipping: [testbed-node-4] 2026-03-24 03:53:56.594719 | orchestrator | skipping: [testbed-node-5] 2026-03-24 03:53:56.594729 | orchestrator | 2026-03-24 03:53:56.594740 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:53:56.594752 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:53:56.594796 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-24 03:53:56.594823 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:53:56.594841 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:53:56.594859 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:53:56.594879 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:53:56.594899 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:53:56.594920 | orchestrator | 2026-03-24 03:53:56.594940 | orchestrator | 2026-03-24 03:53:56.594953 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:53:56.594974 | orchestrator | Tuesday 24 March 2026 03:53:56 +0000 (0:00:00.723) 0:00:10.545 ********* 2026-03-24 03:53:56.594985 | orchestrator | =============================================================================== 2026-03-24 03:53:56.594996 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.55s 2026-03-24 03:53:56.595006 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s 2026-03-24 03:53:56.595017 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.94s 2026-03-24 03:53:56.595028 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2026-03-24 03:53:56.946681 | orchestrator | + osism validate ceph-mons 2026-03-24 03:54:28.481697 | orchestrator | 2026-03-24 03:54:28.481831 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-03-24 03:54:28.481860 | orchestrator | 2026-03-24 03:54:28.481877 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-03-24 03:54:28.481892 | orchestrator | Tuesday 24 March 2026 03:54:13 +0000 (0:00:00.441) 0:00:00.441 ********* 2026-03-24 03:54:28.481907 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:54:28.481923 | orchestrator | 2026-03-24 03:54:28.481940 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-24 03:54:28.481953 | orchestrator | Tuesday 24 March 2026 03:54:14 +0000 (0:00:00.827) 0:00:01.268 ********* 2026-03-24 03:54:28.481970 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:54:28.481986 | orchestrator | 2026-03-24 03:54:28.482003 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-24 03:54:28.482097 | orchestrator | Tuesday 24 March 2026 03:54:15 +0000 (0:00:00.968) 0:00:02.237 ********* 2026-03-24 03:54:28.482115 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.482132 | orchestrator | 2026-03-24 03:54:28.482148 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-24 03:54:28.482166 | orchestrator | Tuesday 24 March 2026 03:54:15 +0000 (0:00:00.117) 0:00:02.354 ********* 2026-03-24 03:54:28.482183 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.482201 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:54:28.482217 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:54:28.482228 | orchestrator | 2026-03-24 03:54:28.482239 | orchestrator | TASK [Get container info] ****************************************************** 2026-03-24 03:54:28.482251 | orchestrator | Tuesday 24 March 2026 03:54:15 +0000 (0:00:00.303) 0:00:02.658 ********* 2026-03-24 03:54:28.482262 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.482275 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:54:28.482286 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:54:28.482296 | 
orchestrator | 2026-03-24 03:54:28.482308 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-24 03:54:28.482319 | orchestrator | Tuesday 24 March 2026 03:54:16 +0000 (0:00:01.004) 0:00:03.662 ********* 2026-03-24 03:54:28.482331 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.482342 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:54:28.482382 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:54:28.482393 | orchestrator | 2026-03-24 03:54:28.482403 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-24 03:54:28.482413 | orchestrator | Tuesday 24 March 2026 03:54:16 +0000 (0:00:00.278) 0:00:03.941 ********* 2026-03-24 03:54:28.482422 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.482432 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:54:28.482441 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:54:28.482451 | orchestrator | 2026-03-24 03:54:28.482468 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-24 03:54:28.482489 | orchestrator | Tuesday 24 March 2026 03:54:17 +0000 (0:00:00.468) 0:00:04.409 ********* 2026-03-24 03:54:28.482511 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.482528 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:54:28.482544 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:54:28.482592 | orchestrator | 2026-03-24 03:54:28.482609 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-03-24 03:54:28.482625 | orchestrator | Tuesday 24 March 2026 03:54:17 +0000 (0:00:00.289) 0:00:04.699 ********* 2026-03-24 03:54:28.482642 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.482659 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:54:28.482673 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:54:28.482689 | orchestrator | 2026-03-24 
03:54:28.482707 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-03-24 03:54:28.482725 | orchestrator | Tuesday 24 March 2026 03:54:17 +0000 (0:00:00.286) 0:00:04.986 ********* 2026-03-24 03:54:28.482743 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.482757 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:54:28.482775 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:54:28.482790 | orchestrator | 2026-03-24 03:54:28.482807 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-24 03:54:28.482824 | orchestrator | Tuesday 24 March 2026 03:54:18 +0000 (0:00:00.483) 0:00:05.470 ********* 2026-03-24 03:54:28.482840 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.482857 | orchestrator | 2026-03-24 03:54:28.482875 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-24 03:54:28.482968 | orchestrator | Tuesday 24 March 2026 03:54:18 +0000 (0:00:00.242) 0:00:05.712 ********* 2026-03-24 03:54:28.482996 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.483012 | orchestrator | 2026-03-24 03:54:28.483028 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-24 03:54:28.483045 | orchestrator | Tuesday 24 March 2026 03:54:18 +0000 (0:00:00.249) 0:00:05.962 ********* 2026-03-24 03:54:28.483061 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.483078 | orchestrator | 2026-03-24 03:54:28.483095 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:54:28.483114 | orchestrator | Tuesday 24 March 2026 03:54:19 +0000 (0:00:00.231) 0:00:06.193 ********* 2026-03-24 03:54:28.483132 | orchestrator | 2026-03-24 03:54:28.483150 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:54:28.483167 | orchestrator | 
Tuesday 24 March 2026 03:54:19 +0000 (0:00:00.068) 0:00:06.261 ********* 2026-03-24 03:54:28.483182 | orchestrator | 2026-03-24 03:54:28.483193 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:54:28.483206 | orchestrator | Tuesday 24 March 2026 03:54:19 +0000 (0:00:00.068) 0:00:06.330 ********* 2026-03-24 03:54:28.483222 | orchestrator | 2026-03-24 03:54:28.483238 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-24 03:54:28.483254 | orchestrator | Tuesday 24 March 2026 03:54:19 +0000 (0:00:00.090) 0:00:06.421 ********* 2026-03-24 03:54:28.483270 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.483287 | orchestrator | 2026-03-24 03:54:28.483304 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-24 03:54:28.483322 | orchestrator | Tuesday 24 March 2026 03:54:19 +0000 (0:00:00.254) 0:00:06.675 ********* 2026-03-24 03:54:28.483333 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.483343 | orchestrator | 2026-03-24 03:54:28.483406 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-03-24 03:54:28.483418 | orchestrator | Tuesday 24 March 2026 03:54:19 +0000 (0:00:00.231) 0:00:06.907 ********* 2026-03-24 03:54:28.483428 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.483438 | orchestrator | 2026-03-24 03:54:28.483453 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-03-24 03:54:28.483463 | orchestrator | Tuesday 24 March 2026 03:54:19 +0000 (0:00:00.139) 0:00:07.046 ********* 2026-03-24 03:54:28.483472 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:54:28.483487 | orchestrator | 2026-03-24 03:54:28.483504 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-03-24 03:54:28.483514 | orchestrator | 
Tuesday 24 March 2026 03:54:21 +0000 (0:00:01.655) 0:00:08.702 ********* 2026-03-24 03:54:28.483540 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.483550 | orchestrator | 2026-03-24 03:54:28.483560 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2026-03-24 03:54:28.483587 | orchestrator | Tuesday 24 March 2026 03:54:22 +0000 (0:00:00.466) 0:00:09.168 ********* 2026-03-24 03:54:28.483597 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.483607 | orchestrator | 2026-03-24 03:54:28.483616 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2026-03-24 03:54:28.483626 | orchestrator | Tuesday 24 March 2026 03:54:22 +0000 (0:00:00.155) 0:00:09.324 ********* 2026-03-24 03:54:28.483636 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.483645 | orchestrator | 2026-03-24 03:54:28.483656 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2026-03-24 03:54:28.483672 | orchestrator | Tuesday 24 March 2026 03:54:22 +0000 (0:00:00.314) 0:00:09.639 ********* 2026-03-24 03:54:28.483697 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.483714 | orchestrator | 2026-03-24 03:54:28.483730 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2026-03-24 03:54:28.483745 | orchestrator | Tuesday 24 March 2026 03:54:22 +0000 (0:00:00.302) 0:00:09.941 ********* 2026-03-24 03:54:28.483761 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.483775 | orchestrator | 2026-03-24 03:54:28.483791 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2026-03-24 03:54:28.483807 | orchestrator | Tuesday 24 March 2026 03:54:23 +0000 (0:00:00.130) 0:00:10.072 ********* 2026-03-24 03:54:28.483823 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.483840 | orchestrator | 2026-03-24 03:54:28.483855 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2026-03-24 03:54:28.483872 | orchestrator | Tuesday 24 March 2026 03:54:23 +0000 (0:00:00.135) 0:00:10.208 ********* 2026-03-24 03:54:28.483889 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.483904 | orchestrator | 2026-03-24 03:54:28.483921 | orchestrator | TASK [Gather status data] ****************************************************** 2026-03-24 03:54:28.483931 | orchestrator | Tuesday 24 March 2026 03:54:23 +0000 (0:00:00.121) 0:00:10.330 ********* 2026-03-24 03:54:28.483941 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:54:28.483951 | orchestrator | 2026-03-24 03:54:28.483961 | orchestrator | TASK [Set health test data] **************************************************** 2026-03-24 03:54:28.483970 | orchestrator | Tuesday 24 March 2026 03:54:24 +0000 (0:00:01.305) 0:00:11.635 ********* 2026-03-24 03:54:28.483980 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.483989 | orchestrator | 2026-03-24 03:54:28.484002 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2026-03-24 03:54:28.484017 | orchestrator | Tuesday 24 March 2026 03:54:24 +0000 (0:00:00.291) 0:00:11.927 ********* 2026-03-24 03:54:28.484033 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.484047 | orchestrator | 2026-03-24 03:54:28.484071 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2026-03-24 03:54:28.484086 | orchestrator | Tuesday 24 March 2026 03:54:25 +0000 (0:00:00.146) 0:00:12.073 ********* 2026-03-24 03:54:28.484100 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:54:28.484114 | orchestrator | 2026-03-24 03:54:28.484130 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2026-03-24 03:54:28.484153 | orchestrator | Tuesday 24 March 2026 03:54:25 +0000 (0:00:00.132) 0:00:12.205 ********* 2026-03-24 03:54:28.484168 | 
orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.484183 | orchestrator | 2026-03-24 03:54:28.484198 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-24 03:54:28.484212 | orchestrator | Tuesday 24 March 2026 03:54:25 +0000 (0:00:00.132) 0:00:12.338 ********* 2026-03-24 03:54:28.484227 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.484243 | orchestrator | 2026-03-24 03:54:28.484258 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-24 03:54:28.484274 | orchestrator | Tuesday 24 March 2026 03:54:25 +0000 (0:00:00.314) 0:00:12.652 ********* 2026-03-24 03:54:28.484303 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:54:28.484320 | orchestrator | 2026-03-24 03:54:28.484335 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-24 03:54:28.484351 | orchestrator | Tuesday 24 March 2026 03:54:25 +0000 (0:00:00.254) 0:00:12.907 ********* 2026-03-24 03:54:28.484401 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:54:28.484417 | orchestrator | 2026-03-24 03:54:28.484433 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-24 03:54:28.484447 | orchestrator | Tuesday 24 March 2026 03:54:26 +0000 (0:00:00.240) 0:00:13.147 ********* 2026-03-24 03:54:28.484463 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:54:28.484480 | orchestrator | 2026-03-24 03:54:28.484496 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-24 03:54:28.484512 | orchestrator | Tuesday 24 March 2026 03:54:27 +0000 (0:00:01.679) 0:00:14.827 ********* 2026-03-24 03:54:28.484529 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:54:28.484545 | orchestrator | 2026-03-24 03:54:28.484560 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-03-24 03:54:28.484570 | orchestrator | Tuesday 24 March 2026 03:54:28 +0000 (0:00:00.249) 0:00:15.076 ********* 2026-03-24 03:54:28.484587 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:54:28.484601 | orchestrator | 2026-03-24 03:54:28.484637 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:54:30.947702 | orchestrator | Tuesday 24 March 2026 03:54:28 +0000 (0:00:00.243) 0:00:15.320 ********* 2026-03-24 03:54:30.947800 | orchestrator | 2026-03-24 03:54:30.947815 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:54:30.947825 | orchestrator | Tuesday 24 March 2026 03:54:28 +0000 (0:00:00.064) 0:00:15.385 ********* 2026-03-24 03:54:30.947834 | orchestrator | 2026-03-24 03:54:30.947844 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:54:30.947854 | orchestrator | Tuesday 24 March 2026 03:54:28 +0000 (0:00:00.064) 0:00:15.449 ********* 2026-03-24 03:54:30.947864 | orchestrator | 2026-03-24 03:54:30.947874 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-24 03:54:30.947884 | orchestrator | Tuesday 24 March 2026 03:54:28 +0000 (0:00:00.067) 0:00:15.517 ********* 2026-03-24 03:54:30.947895 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:54:30.947905 | orchestrator | 2026-03-24 03:54:30.947915 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-24 03:54:30.947925 | orchestrator | Tuesday 24 March 2026 03:54:29 +0000 (0:00:01.398) 0:00:16.915 ********* 2026-03-24 03:54:30.947935 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-24 03:54:30.947945 | orchestrator |  "msg": [ 
2026-03-24 03:54:30.947957 | orchestrator |  "Validator run completed.", 2026-03-24 03:54:30.947968 | orchestrator |  "You can find the report file here:", 2026-03-24 03:54:30.947979 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-24T03:54:14+00:00-report.json", 2026-03-24 03:54:30.947988 | orchestrator |  "on the following host:", 2026-03-24 03:54:30.947995 | orchestrator |  "testbed-manager" 2026-03-24 03:54:30.948001 | orchestrator |  ] 2026-03-24 03:54:30.948008 | orchestrator | } 2026-03-24 03:54:30.948014 | orchestrator | 2026-03-24 03:54:30.948020 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:54:30.948028 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-24 03:54:30.948036 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:54:30.948043 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:54:30.948071 | orchestrator | 2026-03-24 03:54:30.948078 | orchestrator | 2026-03-24 03:54:30.948084 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:54:30.948090 | orchestrator | Tuesday 24 March 2026 03:54:30 +0000 (0:00:00.785) 0:00:17.701 ********* 2026-03-24 03:54:30.948096 | orchestrator | =============================================================================== 2026-03-24 03:54:30.948102 | orchestrator | Aggregate test results step one ----------------------------------------- 1.68s 2026-03-24 03:54:30.948108 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.66s 2026-03-24 03:54:30.948114 | orchestrator | Write report file ------------------------------------------------------- 1.40s 2026-03-24 03:54:30.948120 | orchestrator | Gather status data 
------------------------------------------------------ 1.31s 2026-03-24 03:54:30.948126 | orchestrator | Get container info ------------------------------------------------------ 1.00s 2026-03-24 03:54:30.948132 | orchestrator | Create report output directory ------------------------------------------ 0.97s 2026-03-24 03:54:30.948138 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s 2026-03-24 03:54:30.948154 | orchestrator | Print report file information ------------------------------------------- 0.79s 2026-03-24 03:54:30.948164 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.48s 2026-03-24 03:54:30.948174 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s 2026-03-24 03:54:30.948184 | orchestrator | Set quorum test data ---------------------------------------------------- 0.47s 2026-03-24 03:54:30.948194 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s 2026-03-24 03:54:30.948203 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.31s 2026-03-24 03:54:30.948212 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2026-03-24 03:54:30.948222 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s 2026-03-24 03:54:30.948232 | orchestrator | Set health test data ---------------------------------------------------- 0.29s 2026-03-24 03:54:30.948242 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-03-24 03:54:30.948251 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s 2026-03-24 03:54:30.948260 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2026-03-24 03:54:30.948269 | orchestrator | Print report file information 
------------------------------------------- 0.25s 2026-03-24 03:54:31.222584 | orchestrator | + osism validate ceph-mgrs 2026-03-24 03:55:01.677173 | orchestrator | 2026-03-24 03:55:01.677282 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-24 03:55:01.677293 | orchestrator | 2026-03-24 03:55:01.677300 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-24 03:55:01.677306 | orchestrator | Tuesday 24 March 2026 03:54:47 +0000 (0:00:00.452) 0:00:00.452 ********* 2026-03-24 03:55:01.677314 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:55:01.677321 | orchestrator | 2026-03-24 03:55:01.677327 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-24 03:55:01.677334 | orchestrator | Tuesday 24 March 2026 03:54:48 +0000 (0:00:00.859) 0:00:01.311 ********* 2026-03-24 03:55:01.677387 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:55:01.677393 | orchestrator | 2026-03-24 03:55:01.677400 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-24 03:55:01.677407 | orchestrator | Tuesday 24 March 2026 03:54:49 +0000 (0:00:00.932) 0:00:02.243 ********* 2026-03-24 03:55:01.677412 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.677420 | orchestrator | 2026-03-24 03:55:01.677426 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-24 03:55:01.677433 | orchestrator | Tuesday 24 March 2026 03:54:49 +0000 (0:00:00.128) 0:00:02.372 ********* 2026-03-24 03:55:01.677463 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.677471 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:55:01.677478 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:55:01.677484 | orchestrator | 2026-03-24 03:55:01.677491 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-03-24 03:55:01.677498 | orchestrator | Tuesday 24 March 2026 03:54:49 +0000 (0:00:00.272) 0:00:02.644 ********* 2026-03-24 03:55:01.677505 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:55:01.677512 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.677518 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:55:01.677525 | orchestrator | 2026-03-24 03:55:01.677532 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-24 03:55:01.677539 | orchestrator | Tuesday 24 March 2026 03:54:51 +0000 (0:00:01.096) 0:00:03.741 ********* 2026-03-24 03:55:01.677546 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:55:01.677553 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:55:01.677560 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:55:01.677567 | orchestrator | 2026-03-24 03:55:01.677574 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-24 03:55:01.677580 | orchestrator | Tuesday 24 March 2026 03:54:51 +0000 (0:00:00.282) 0:00:04.024 ********* 2026-03-24 03:55:01.677587 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.677594 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:55:01.677601 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:55:01.677608 | orchestrator | 2026-03-24 03:55:01.677615 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-24 03:55:01.677621 | orchestrator | Tuesday 24 March 2026 03:54:51 +0000 (0:00:00.447) 0:00:04.471 ********* 2026-03-24 03:55:01.677628 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.677635 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:55:01.677642 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:55:01.677649 | orchestrator | 2026-03-24 03:55:01.677656 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-03-24 03:55:01.677662 | orchestrator | Tuesday 24 March 2026 03:54:52 +0000 (0:00:00.308) 0:00:04.780 ********* 2026-03-24 03:55:01.677669 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:55:01.677676 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:55:01.677683 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:55:01.677690 | orchestrator | 2026-03-24 03:55:01.677697 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-24 03:55:01.677703 | orchestrator | Tuesday 24 March 2026 03:54:52 +0000 (0:00:00.283) 0:00:05.063 ********* 2026-03-24 03:55:01.677710 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.677717 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:55:01.677724 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:55:01.677743 | orchestrator | 2026-03-24 03:55:01.677758 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-24 03:55:01.677843 | orchestrator | Tuesday 24 March 2026 03:54:52 +0000 (0:00:00.434) 0:00:05.498 ********* 2026-03-24 03:55:01.677850 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:55:01.677856 | orchestrator | 2026-03-24 03:55:01.677863 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-24 03:55:01.677869 | orchestrator | Tuesday 24 March 2026 03:54:53 +0000 (0:00:00.243) 0:00:05.741 ********* 2026-03-24 03:55:01.677875 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:55:01.677881 | orchestrator | 2026-03-24 03:55:01.677886 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-24 03:55:01.677893 | orchestrator | Tuesday 24 March 2026 03:54:53 +0000 (0:00:00.242) 0:00:05.984 ********* 2026-03-24 03:55:01.677899 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:55:01.677905 | orchestrator | 2026-03-24 03:55:01.677911 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-03-24 03:55:01.677917 | orchestrator | Tuesday 24 March 2026 03:54:53 +0000 (0:00:00.243) 0:00:06.227 ********* 2026-03-24 03:55:01.677923 | orchestrator | 2026-03-24 03:55:01.677929 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:55:01.677944 | orchestrator | Tuesday 24 March 2026 03:54:53 +0000 (0:00:00.068) 0:00:06.296 ********* 2026-03-24 03:55:01.677950 | orchestrator | 2026-03-24 03:55:01.677956 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:55:01.677962 | orchestrator | Tuesday 24 March 2026 03:54:53 +0000 (0:00:00.070) 0:00:06.366 ********* 2026-03-24 03:55:01.677968 | orchestrator | 2026-03-24 03:55:01.677975 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-24 03:55:01.677981 | orchestrator | Tuesday 24 March 2026 03:54:53 +0000 (0:00:00.071) 0:00:06.437 ********* 2026-03-24 03:55:01.677986 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:55:01.677992 | orchestrator | 2026-03-24 03:55:01.677998 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-24 03:55:01.678006 | orchestrator | Tuesday 24 March 2026 03:54:53 +0000 (0:00:00.248) 0:00:06.686 ********* 2026-03-24 03:55:01.678012 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:55:01.678070 | orchestrator | 2026-03-24 03:55:01.678095 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-24 03:55:01.678103 | orchestrator | Tuesday 24 March 2026 03:54:54 +0000 (0:00:00.239) 0:00:06.925 ********* 2026-03-24 03:55:01.678109 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.678115 | orchestrator | 2026-03-24 03:55:01.678131 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-03-24 03:55:01.678137 | orchestrator | Tuesday 24 March 2026 03:54:54 +0000 (0:00:00.122) 0:00:07.048 ********* 2026-03-24 03:55:01.678143 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:55:01.678149 | orchestrator | 2026-03-24 03:55:01.678155 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-24 03:55:01.678162 | orchestrator | Tuesday 24 March 2026 03:54:56 +0000 (0:00:01.987) 0:00:09.036 ********* 2026-03-24 03:55:01.678174 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.678181 | orchestrator | 2026-03-24 03:55:01.678188 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-24 03:55:01.678194 | orchestrator | Tuesday 24 March 2026 03:54:56 +0000 (0:00:00.446) 0:00:09.482 ********* 2026-03-24 03:55:01.678200 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.678206 | orchestrator | 2026-03-24 03:55:01.678210 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-24 03:55:01.678214 | orchestrator | Tuesday 24 March 2026 03:54:57 +0000 (0:00:00.316) 0:00:09.798 ********* 2026-03-24 03:55:01.678218 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:55:01.678222 | orchestrator | 2026-03-24 03:55:01.678226 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-24 03:55:01.678230 | orchestrator | Tuesday 24 March 2026 03:54:57 +0000 (0:00:00.139) 0:00:09.938 ********* 2026-03-24 03:55:01.678234 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:55:01.678240 | orchestrator | 2026-03-24 03:55:01.678246 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-24 03:55:01.678252 | orchestrator | Tuesday 24 March 2026 03:54:57 +0000 (0:00:00.141) 0:00:10.080 ********* 2026-03-24 03:55:01.678258 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 
03:55:01.678265 | orchestrator | 2026-03-24 03:55:01.678271 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-24 03:55:01.678278 | orchestrator | Tuesday 24 March 2026 03:54:57 +0000 (0:00:00.268) 0:00:10.348 ********* 2026-03-24 03:55:01.678303 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:55:01.678310 | orchestrator | 2026-03-24 03:55:01.678316 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-24 03:55:01.678320 | orchestrator | Tuesday 24 March 2026 03:54:57 +0000 (0:00:00.239) 0:00:10.588 ********* 2026-03-24 03:55:01.678324 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:55:01.678328 | orchestrator | 2026-03-24 03:55:01.678332 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-24 03:55:01.678362 | orchestrator | Tuesday 24 March 2026 03:54:59 +0000 (0:00:01.239) 0:00:11.828 ********* 2026-03-24 03:55:01.678367 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:55:01.678371 | orchestrator | 2026-03-24 03:55:01.678375 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-24 03:55:01.678379 | orchestrator | Tuesday 24 March 2026 03:54:59 +0000 (0:00:00.239) 0:00:12.067 ********* 2026-03-24 03:55:01.678383 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:55:01.678387 | orchestrator | 2026-03-24 03:55:01.678391 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:55:01.678394 | orchestrator | Tuesday 24 March 2026 03:54:59 +0000 (0:00:00.244) 0:00:12.312 ********* 2026-03-24 03:55:01.678400 | orchestrator | 2026-03-24 03:55:01.678406 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:55:01.678412 | orchestrator 
| Tuesday 24 March 2026 03:54:59 +0000 (0:00:00.079) 0:00:12.391 ********* 2026-03-24 03:55:01.678417 | orchestrator | 2026-03-24 03:55:01.678423 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-24 03:55:01.678429 | orchestrator | Tuesday 24 March 2026 03:54:59 +0000 (0:00:00.066) 0:00:12.458 ********* 2026-03-24 03:55:01.678434 | orchestrator | 2026-03-24 03:55:01.678440 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-24 03:55:01.678447 | orchestrator | Tuesday 24 March 2026 03:54:59 +0000 (0:00:00.221) 0:00:12.680 ********* 2026-03-24 03:55:01.678453 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-24 03:55:01.678460 | orchestrator | 2026-03-24 03:55:01.678471 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-24 03:55:01.678477 | orchestrator | Tuesday 24 March 2026 03:55:01 +0000 (0:00:01.273) 0:00:13.953 ********* 2026-03-24 03:55:01.678481 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-24 03:55:01.678485 | orchestrator |  "msg": [ 2026-03-24 03:55:01.678489 | orchestrator |  "Validator run completed.", 2026-03-24 03:55:01.678493 | orchestrator |  "You can find the report file here:", 2026-03-24 03:55:01.678497 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-24T03:54:48+00:00-report.json", 2026-03-24 03:55:01.678502 | orchestrator |  "on the following host:", 2026-03-24 03:55:01.678506 | orchestrator |  "testbed-manager" 2026-03-24 03:55:01.678510 | orchestrator |  ] 2026-03-24 03:55:01.678514 | orchestrator | } 2026-03-24 03:55:01.678519 | orchestrator | 2026-03-24 03:55:01.678522 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:55:01.678527 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-24 03:55:01.678533 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:55:01.678544 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:55:01.964175 | orchestrator | 2026-03-24 03:55:01.964298 | orchestrator | 2026-03-24 03:55:01.964320 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:55:01.964403 | orchestrator | Tuesday 24 March 2026 03:55:01 +0000 (0:00:00.403) 0:00:14.357 ********* 2026-03-24 03:55:01.964425 | orchestrator | =============================================================================== 2026-03-24 03:55:01.964446 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.99s 2026-03-24 03:55:01.964460 | orchestrator | Write report file ------------------------------------------------------- 1.27s 2026-03-24 03:55:01.964471 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s 2026-03-24 03:55:01.964481 | orchestrator | Get container info ------------------------------------------------------ 1.10s 2026-03-24 03:55:01.964493 | orchestrator | Create report output directory ------------------------------------------ 0.93s 2026-03-24 03:55:01.964531 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s 2026-03-24 03:55:01.964551 | orchestrator | Set test result to passed if container is existing ---------------------- 0.45s 2026-03-24 03:55:01.964580 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.45s 2026-03-24 03:55:01.964599 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.43s 2026-03-24 03:55:01.964617 | orchestrator | Print report file information ------------------------------------------- 0.40s 2026-03-24 03:55:01.964635 | 
orchestrator | Flush handlers ---------------------------------------------------------- 0.37s 2026-03-24 03:55:01.964652 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-03-24 03:55:01.964669 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2026-03-24 03:55:01.964685 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s 2026-03-24 03:55:01.964703 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2026-03-24 03:55:01.964723 | orchestrator | Prepare test data for container existance test -------------------------- 0.27s 2026-03-24 03:55:01.964742 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s 2026-03-24 03:55:01.964761 | orchestrator | Print report file information ------------------------------------------- 0.25s 2026-03-24 03:55:01.964778 | orchestrator | Aggregate test results step three --------------------------------------- 0.24s 2026-03-24 03:55:01.964798 | orchestrator | Aggregate test results step three --------------------------------------- 0.24s 2026-03-24 03:55:02.230989 | orchestrator | + osism validate ceph-osds 2026-03-24 03:55:21.943939 | orchestrator | 2026-03-24 03:55:21.944054 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-24 03:55:21.944074 | orchestrator | 2026-03-24 03:55:21.944087 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-24 03:55:21.944110 | orchestrator | Tuesday 24 March 2026 03:55:18 +0000 (0:00:00.416) 0:00:00.416 ********* 2026-03-24 03:55:21.944126 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-24 03:55:21.944141 | orchestrator | 2026-03-24 03:55:21.944156 | orchestrator | TASK [Get extra vars for Ceph configuration] 
***********************************
2026-03-24 03:55:21.944170 | orchestrator | Tuesday 24 March 2026 03:55:19 +0000 (0:00:00.694) 0:00:01.111 *********
2026-03-24 03:55:21.944184 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-24 03:55:21.944198 | orchestrator |
2026-03-24 03:55:21.944210 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-24 03:55:21.944222 | orchestrator | Tuesday 24 March 2026 03:55:19 +0000 (0:00:00.385) 0:00:01.497 *********
2026-03-24 03:55:21.944237 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-24 03:55:21.944252 | orchestrator |
2026-03-24 03:55:21.944267 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-24 03:55:21.944282 | orchestrator | Tuesday 24 March 2026 03:55:20 +0000 (0:00:00.606) 0:00:02.103 *********
2026-03-24 03:55:21.944297 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:21.944312 | orchestrator |
2026-03-24 03:55:21.944396 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-24 03:55:21.944415 | orchestrator | Tuesday 24 March 2026 03:55:20 +0000 (0:00:00.096) 0:00:02.200 *********
2026-03-24 03:55:21.944427 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:21.944436 | orchestrator |
2026-03-24 03:55:21.944462 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-24 03:55:21.944473 | orchestrator | Tuesday 24 March 2026 03:55:20 +0000 (0:00:00.115) 0:00:02.315 *********
2026-03-24 03:55:21.944483 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:21.944493 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:55:21.944503 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:55:21.944514 | orchestrator |
2026-03-24 03:55:21.944524 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-24 03:55:21.944553 | orchestrator | Tuesday 24 March 2026 03:55:20 +0000 (0:00:00.274) 0:00:02.590 *********
2026-03-24 03:55:21.944563 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:21.944573 | orchestrator |
2026-03-24 03:55:21.944587 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-24 03:55:21.944600 | orchestrator | Tuesday 24 March 2026 03:55:20 +0000 (0:00:00.116) 0:00:02.707 *********
2026-03-24 03:55:21.944613 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:21.944628 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:21.944641 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:21.944655 | orchestrator |
2026-03-24 03:55:21.944667 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-03-24 03:55:21.944675 | orchestrator | Tuesday 24 March 2026 03:55:20 +0000 (0:00:00.264) 0:00:02.972 *********
2026-03-24 03:55:21.944683 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:21.944691 | orchestrator |
2026-03-24 03:55:21.944699 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-24 03:55:21.944708 | orchestrator | Tuesday 24 March 2026 03:55:21 +0000 (0:00:00.552) 0:00:03.524 *********
2026-03-24 03:55:21.944715 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:21.944723 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:21.944731 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:21.944739 | orchestrator |
2026-03-24 03:55:21.944747 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-03-24 03:55:21.944755 | orchestrator | Tuesday 24 March 2026 03:55:21 +0000 (0:00:00.269) 0:00:03.793 *********
2026-03-24 03:55:21.944765 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1a3247de722dd537ed278fe06880aa941600feb8a8fc09b90fce87620b8174b2', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-24 03:55:21.944776 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd8a823890e86b4108034fe4e1399b1228ab3ab751daf122861d7dd24064e1f9d', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-24 03:55:21.944785 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5ccaed30da5f6404ef86f8b1dfe1ea67b3dadd8c98dc29bd74fb1c5d13d357d5', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-24 03:55:21.944794 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0c99ce79d2356548caf175b7f074819401ecd41cb44ea5c712bdabf4724a2bff', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-03-24 03:55:21.944802 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e7bd94d61c61bb15424b2ee6e3303e6b8afa55329a68e1cd713b10c515b02dd7', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 38 minutes (healthy)'})
2026-03-24 03:55:21.944834 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cb8e04f5341fc000251ef3d6ac8228d29e8262eaaa20be06d7bd5497367473e0', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})
2026-03-24 03:55:21.944843 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bd9652cb09124eae8719902ce27a4597fbc0d78dbab6e50dbfa05c75b499d870', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})
2026-03-24 03:55:21.944851 | orchestrator | skipping: [testbed-node-3] => (item={'id': '123f1a6f9cf5be98c6d2956e5913aaa19c67a08cadd9d5390327ef92c1b311ff', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 46 minutes (healthy)'})
2026-03-24 03:55:21.944867 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e3c5e27f5a3812b19f3ad99efc35bc0fa5323209d35db53981ff209172663d5b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:21.944881 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e71305b9c6d50728ddd694513bfb8f21cb0d88a0c9e1f57c9a80cb32b6d2b4d0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:21.944890 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5b1c6b9dea7289ef36ab17f5b024bf5f43f942800d7c2f870dc225206f6fee26', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:21.944900 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ffcaee239da688fccd6c6f2f5fe4897db04559e3b6c41f68189ecf3f70455a5c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:21.944908 | orchestrator | ok: [testbed-node-3] => (item={'id': '4c01050ac186e5421571a5767442b18e1ab718f9e99b95287caab533f0515647', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:21.944917 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0e7676187ff94af5f24f2619cd7549a25825a44dd60c86f3c5bcf9b28de8e400', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:21.944925 | orchestrator | skipping: [testbed-node-3] => (item={'id': '47ce1f431b115f292c1fdce6c9f0746e9d828693db2d060ff5da0abf3ab3a42b', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-24 03:55:21.944933 | orchestrator | skipping: [testbed-node-3] => (item={'id': '63ac06ff61474dbf7c0f015d7ea2275fee83a3946c4755e232dd2ed3f895eaa6', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-24 03:55:21.944941 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f2929fadd9090dddf9580631b7f25de33afd51999e04b55f6ce0c49c48aa353d', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-24 03:55:21.944950 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'db041f8c3509a7ec2a5013fdb053b616e45d84a33cd318cc105697c6f298de05', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-24 03:55:21.944958 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ccd557fb6d299f06bf05a23636830d056c45922f0b50a3e6e18315fe4efbb0b2', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-24 03:55:21.944967 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2e580e4fb00a8dc83347e052d724d0b85ce33d400f7a4543bad229d2e1e6ce90', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-24 03:55:21.944981 | orchestrator | skipping: [testbed-node-4] => (item={'id': '603ad53b4778bf9e2b0590daa2a708e6472f8c21981500e1b47720a77b11dc20', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-24 03:55:22.068061 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9cdd01f7d8c5245554060dae8e2b4a88c4204421dff49fc986cdfbc891c55ccb', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-24 03:55:22.068176 | orchestrator | skipping: [testbed-node-4] => (item={'id': '16f37341ffe6e7ae466a49033a93fe6d675c9e7b0fa22f6bb25dd0088dc7a32a', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-03-24 03:55:22.068185 | orchestrator | skipping: [testbed-node-4] => (item={'id': '93634be8ea4cc6904e482857baa251c6e3aa5d65bfecab15913d88d5efb35d1f', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 38 minutes (healthy)'})
2026-03-24 03:55:22.068192 | orchestrator | skipping: [testbed-node-4] => (item={'id': '41e6885f11a137e92feac59f46b02b68b38638175be33e287a373d9fe2fe6e19', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})
2026-03-24 03:55:22.068196 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e2a8ee1abd68b75edf64ec7320acc65b851dcb01a72b8b3e7782e03e4eced93d', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})
2026-03-24 03:55:22.068233 | orchestrator | skipping: [testbed-node-4] => (item={'id': '319afffc64f4b2e1163d9727bc8061ff0033e34898dbf3561f28d847b6026047', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 46 minutes (healthy)'})
2026-03-24 03:55:22.068238 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ffbaefb925af79f41e70eca1712c57593a3c484d697e92a0438344796fc3ecf8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068243 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0c63ff0f86c8f49037753b2c64f615c03bd3d3666ec28ef559777a5cbd4a3239', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068248 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1320f93e87c0af1bd8e8d77934d38aa490ef2f4a0463855a22284325a03bcf0f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068253 | orchestrator | ok: [testbed-node-4] => (item={'id': 'fc0a9f20eaed59343e794d435913e850e8a56ee47e28c6be579b48428e0e758b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068258 | orchestrator | ok: [testbed-node-4] => (item={'id': '93b7bea718f491e6422ac41cba92ebd9749377fd90ad2dfb10b4c6f17047a7ec', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068263 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7c093f888d676579429fd55fdb3050a3e6c9f246c2982629cc4a8a0723a260dd', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068267 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd2ea28615ae24e0c3c2a00a6f694fd2ab0dcc6c24a29c2c88f669f10ee2fb022', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-24 03:55:22.068271 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd0229870ffa9595766d7dc252dd58649e1bf94b9489d17329c6e78939097615a', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-24 03:55:22.068291 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ac1c15ec3d0f0e0607b9df7785c6dabab381df13f2cd18b079c1335bf8186863', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-24 03:55:22.068296 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a0278eab8e768c257f07a007689e52174ca0d5ae6b7c0467709f80e1432fbd99', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-24 03:55:22.068300 | orchestrator | skipping: [testbed-node-4] => (item={'id': '017dc67ce1a022620fa5cceced5975c39bfdca3316bdaf496984191dc4995094', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-24 03:55:22.068304 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd3cc02c5ed7ff9313912e993380310a56d6ff1dc0f767bc89f146e7816bd7975', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-24 03:55:22.068311 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7f3d4ac85055f62bb147c6ba292dbffcef4b81ab7e3343360a0ba3fed0623074', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-24 03:55:22.068315 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ad464a5dee60cbadf6558a17f6c646437a7e5d0e4ce09259993fe257bdc24aec', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-24 03:55:22.068318 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9dcd0144ec8bfb1ac4ae6dfdebc03889df1ceee286019dd8d2ba9857b561357e', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-03-24 03:55:22.068322 | orchestrator | skipping: [testbed-node-5] => (item={'id': '97018ad661fc57d4b1522c9045796183b0e20b8618c3c8cd24e475578f34f563', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 38 minutes (healthy)'})
2026-03-24 03:55:22.068326 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2c200d3ec0375f674311e1d6b75a3c11a1b1c6caf69fcc3f18ec044126b584b8', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})
2026-03-24 03:55:22.068362 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'abe9698c62a6b9b7110870fc33d643191168cd65271cb41172462ed8b2e7b003', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})
2026-03-24 03:55:22.068366 | orchestrator | skipping: [testbed-node-5] => (item={'id': '766da81af480bd868180d4a20906f0f55076f23bd98b2160375b9c2ecad59169', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 46 minutes (healthy)'})
2026-03-24 03:55:22.068370 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9ba641310453a8fa6ea45456300ba61844d829abc469ace5ee4487f8f7a6196a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068374 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cc28ecc8c7ccc21f821f22ac34defe9b775345b805acd94940a20b3c0801151f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068383 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4f1771bf11f5e04dbd92a892b3486e1b93d43ef84325d2f4e23a819b4b82a7fa', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068387 | orchestrator | ok: [testbed-node-5] => (item={'id': 'd47361567a945e5ab074eb2fa7bf634e52c607bb15d0da5c2dc26df22cac81bb', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:22.068395 | orchestrator | ok: [testbed-node-5] => (item={'id': '7aa90e2f8f403240daa57fff5528b1ee7361f648abf17b2fda9ad889c215b7e9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:32.572888 | orchestrator | skipping: [testbed-node-5] => (item={'id': '44b377162ab35661cd842773f5faa1d558ad1c6ad7a4f365631ad5924bc773a4', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-24 03:55:32.572994 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6e814da06b3bd4a100b6ffeb49092106ae0ed67fb36da9aaef5403fba57c7825', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-24 03:55:32.573002 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7d22bb6197f4f2fe220746a2a4fcc996330281b8f92d7973d4d86dcd014ee189', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-24 03:55:32.573024 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'aba13553f2d19ab4fae43dc95210c617ce0e2346476e70efa28643e74c87d43b', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-24 03:55:32.573029 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a80968c0c23731fe243f4daa029c28b5ac10d3d971d9e978e2f5c6c2f8ed865e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-24 03:55:32.573034 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4ffac58e8277bc0052fe64af750dcd1f134fc5a64d1d0ba3aeb00d97ec78e3c2', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-24 03:55:32.573038 | orchestrator |
2026-03-24 03:55:32.573043 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-03-24 03:55:32.573049 | orchestrator | Tuesday 24 March 2026 03:55:22 +0000 (0:00:00.428) 0:00:04.222 *********
2026-03-24 03:55:32.573053 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573058 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:32.573062 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:32.573066 | orchestrator |
2026-03-24 03:55:32.573069 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-03-24 03:55:32.573073 | orchestrator | Tuesday 24 March 2026 03:55:22 +0000 (0:00:00.360) 0:00:04.471 *********
2026-03-24 03:55:32.573077 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573082 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:55:32.573086 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:55:32.573090 | orchestrator |
2026-03-24 03:55:32.573094 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-03-24 03:55:32.573098 | orchestrator | Tuesday 24 March 2026 03:55:22 +0000 (0:00:00.254) 0:00:04.831 *********
2026-03-24 03:55:32.573101 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573105 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:32.573110 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:32.573116 | orchestrator |
2026-03-24 03:55:32.573150 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-24 03:55:32.573158 | orchestrator | Tuesday 24 March 2026 03:55:23 +0000 (0:00:00.254) 0:00:05.085 *********
2026-03-24 03:55:32.573164 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573170 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:32.573176 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:32.573182 | orchestrator |
2026-03-24 03:55:32.573187 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-24 03:55:32.573193 | orchestrator | Tuesday 24 March 2026 03:55:23 +0000 (0:00:00.253) 0:00:05.339 *********
2026-03-24 03:55:32.573199 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-24 03:55:32.573207 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-24 03:55:32.573213 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573219 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-24 03:55:32.573225 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-24 03:55:32.573232 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:55:32.573238 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-24 03:55:32.573243 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-24 03:55:32.573249 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:55:32.573256 | orchestrator |
2026-03-24 03:55:32.573262 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-24 03:55:32.573268 | orchestrator | Tuesday 24 March 2026 03:55:23 +0000 (0:00:00.265) 0:00:05.604 *********
2026-03-24 03:55:32.573274 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573281 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:32.573287 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:32.573293 | orchestrator |
2026-03-24 03:55:32.573300 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-24 03:55:32.573307 | orchestrator | Tuesday 24 March 2026 03:55:23 +0000 (0:00:00.388) 0:00:05.993 *********
2026-03-24 03:55:32.573313 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573361 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:55:32.573368 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:55:32.573375 | orchestrator |
2026-03-24 03:55:32.573381 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-24 03:55:32.573387 | orchestrator | Tuesday 24 March 2026 03:55:24 +0000 (0:00:00.257) 0:00:06.250 *********
2026-03-24 03:55:32.573394 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573401 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:55:32.573407 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:55:32.573413 | orchestrator |
2026-03-24 03:55:32.573420 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-24 03:55:32.573424 | orchestrator | Tuesday 24 March 2026 03:55:24 +0000 (0:00:00.238) 0:00:06.489 *********
2026-03-24 03:55:32.573428 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573431 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:32.573436 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:32.573440 | orchestrator |
2026-03-24 03:55:32.573444 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-24 03:55:32.573449 | orchestrator | Tuesday 24 March 2026 03:55:24 +0000 (0:00:00.248) 0:00:06.738 *********
2026-03-24 03:55:32.573453 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573457 | orchestrator |
2026-03-24 03:55:32.573461 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-24 03:55:32.573471 | orchestrator | Tuesday 24 March 2026 03:55:25 +0000 (0:00:00.491) 0:00:07.229 *********
2026-03-24 03:55:32.573476 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573480 | orchestrator |
2026-03-24 03:55:32.573492 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-24 03:55:32.573497 | orchestrator | Tuesday 24 March 2026 03:55:25 +0000 (0:00:00.220) 0:00:07.450 *********
2026-03-24 03:55:32.573501 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573505 | orchestrator |
2026-03-24 03:55:32.573509 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-24 03:55:32.573514 | orchestrator | Tuesday 24 March 2026 03:55:25 +0000 (0:00:00.231) 0:00:07.681 *********
2026-03-24 03:55:32.573519 | orchestrator |
2026-03-24 03:55:32.573523 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-24 03:55:32.573527 | orchestrator | Tuesday 24 March 2026 03:55:25 +0000 (0:00:00.085) 0:00:07.767 *********
2026-03-24 03:55:32.573531 | orchestrator |
2026-03-24 03:55:32.573536 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-24 03:55:32.573540 | orchestrator | Tuesday 24 March 2026 03:55:25 +0000 (0:00:00.094) 0:00:07.861 *********
2026-03-24 03:55:32.573544 | orchestrator |
2026-03-24 03:55:32.573548 | orchestrator | TASK [Print report file information] *******************************************
2026-03-24 03:55:32.573553 | orchestrator | Tuesday 24 March 2026 03:55:25 +0000 (0:00:00.079) 0:00:07.940 *********
2026-03-24 03:55:32.573557 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573561 | orchestrator |
2026-03-24 03:55:32.573565 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-03-24 03:55:32.573569 | orchestrator | Tuesday 24 March 2026 03:55:26 +0000 (0:00:00.254) 0:00:08.195 *********
2026-03-24 03:55:32.573574 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573578 | orchestrator |
2026-03-24 03:55:32.573582 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-24 03:55:32.573586 | orchestrator | Tuesday 24 March 2026 03:55:26 +0000 (0:00:00.222) 0:00:08.418 *********
2026-03-24 03:55:32.573591 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573595 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:32.573600 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:32.573604 | orchestrator |
2026-03-24 03:55:32.573608 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-03-24 03:55:32.573612 | orchestrator | Tuesday 24 March 2026 03:55:26 +0000 (0:00:00.350) 0:00:08.768 *********
2026-03-24 03:55:32.573616 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573621 | orchestrator |
2026-03-24 03:55:32.573625 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-03-24 03:55:32.573629 | orchestrator | Tuesday 24 March 2026 03:55:27 +0000 (0:00:00.590) 0:00:09.358 *********
2026-03-24 03:55:32.573634 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-24 03:55:32.573638 | orchestrator |
2026-03-24 03:55:32.573642 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-03-24 03:55:32.573647 | orchestrator | Tuesday 24 March 2026 03:55:28 +0000 (0:00:01.601) 0:00:10.960 *********
2026-03-24 03:55:32.573651 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573655 | orchestrator |
2026-03-24 03:55:32.573660 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-03-24 03:55:32.573664 | orchestrator | Tuesday 24 March 2026 03:55:29 +0000 (0:00:00.131) 0:00:11.092 *********
2026-03-24 03:55:32.573668 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573672 | orchestrator |
2026-03-24 03:55:32.573677 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-03-24 03:55:32.573681 | orchestrator | Tuesday 24 March 2026 03:55:29 +0000 (0:00:00.302) 0:00:11.395 *********
2026-03-24 03:55:32.573685 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:32.573690 | orchestrator |
2026-03-24 03:55:32.573694 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-03-24 03:55:32.573698 | orchestrator | Tuesday 24 March 2026 03:55:29 +0000 (0:00:00.119) 0:00:11.514 *********
2026-03-24 03:55:32.573703 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573707 | orchestrator |
2026-03-24 03:55:32.573711 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-24 03:55:32.573720 | orchestrator | Tuesday 24 March 2026 03:55:29 +0000 (0:00:00.131) 0:00:11.646 *********
2026-03-24 03:55:32.573724 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:32.573728 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:32.573733 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:32.573737 | orchestrator |
2026-03-24 03:55:32.573742 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-03-24 03:55:32.573746 | orchestrator | Tuesday 24 March 2026 03:55:29 +0000 (0:00:00.299) 0:00:11.946 *********
2026-03-24 03:55:32.573751 | orchestrator | changed: [testbed-node-3]
2026-03-24 03:55:32.573755 | orchestrator | changed: [testbed-node-4]
2026-03-24 03:55:32.573760 | orchestrator | changed: [testbed-node-5]
2026-03-24 03:55:42.016663 | orchestrator |
2026-03-24 03:55:42.016761 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-03-24 03:55:42.016770 | orchestrator | Tuesday 24 March 2026 03:55:32 +0000 (0:00:02.696) 0:00:14.642 *********
2026-03-24 03:55:42.016776 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:42.016783 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:42.016788 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:42.016793 | orchestrator |
2026-03-24 03:55:42.016798 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-03-24 03:55:42.016804 | orchestrator | Tuesday 24 March 2026 03:55:32 +0000 (0:00:00.300) 0:00:14.943 *********
2026-03-24 03:55:42.016809 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:42.016814 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:42.016818 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:42.016823 | orchestrator |
2026-03-24 03:55:42.016828 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-03-24 03:55:42.016833 | orchestrator | Tuesday 24 March 2026 03:55:33 +0000 (0:00:00.489) 0:00:15.432 *********
2026-03-24 03:55:42.016838 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:42.016844 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:55:42.016851 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:55:42.016859 | orchestrator |
2026-03-24 03:55:42.016867 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-03-24 03:55:42.016875 | orchestrator | Tuesday 24 March 2026 03:55:33 +0000 (0:00:00.279) 0:00:15.711 *********
2026-03-24 03:55:42.016883 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:42.016890 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:42.016898 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:42.016905 | orchestrator |
2026-03-24 03:55:42.016913 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-03-24 03:55:42.016921 | orchestrator | Tuesday 24 March 2026 03:55:34 +0000 (0:00:00.485) 0:00:16.197 *********
2026-03-24 03:55:42.016929 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:42.016937 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:55:42.016945 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:55:42.016954 | orchestrator |
2026-03-24 03:55:42.016978 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-03-24 03:55:42.016987 | orchestrator | Tuesday 24 March 2026 03:55:34 +0000 (0:00:00.278) 0:00:16.476 *********
2026-03-24 03:55:42.016996 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:42.017005 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:55:42.017013 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:55:42.017022 | orchestrator |
2026-03-24 03:55:42.017030 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-24 03:55:42.017039 | orchestrator | Tuesday 24 March 2026 03:55:34 +0000 (0:00:00.281) 0:00:16.758 *********
2026-03-24 03:55:42.017048 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:42.017057 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:42.017065 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:42.017073 | orchestrator |
2026-03-24 03:55:42.017082 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-03-24 03:55:42.017104 | orchestrator | Tuesday 24 March 2026 03:55:35 +0000 (0:00:00.467) 0:00:17.225 *********
2026-03-24 03:55:42.017142 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:42.017151 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:42.017159 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:42.017168 | orchestrator |
2026-03-24 03:55:42.017177 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-03-24 03:55:42.017186 | orchestrator | Tuesday 24 March 2026 03:55:35 +0000 (0:00:00.733) 0:00:17.959 *********
2026-03-24 03:55:42.017194 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:42.017203 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:42.017212 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:42.017221 | orchestrator |
2026-03-24 03:55:42.017232 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-03-24 03:55:42.017241 | orchestrator | Tuesday 24 March 2026 03:55:36 +0000 (0:00:00.287) 0:00:18.247 *********
2026-03-24 03:55:42.017251 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:42.017261 | orchestrator | skipping: [testbed-node-4]
2026-03-24 03:55:42.017270 | orchestrator | skipping: [testbed-node-5]
2026-03-24 03:55:42.017280 | orchestrator |
2026-03-24 03:55:42.017288 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-03-24 03:55:42.017297 | orchestrator | Tuesday 24 March 2026 03:55:36 +0000 (0:00:00.289) 0:00:18.536 *********
2026-03-24 03:55:42.017305 | orchestrator | ok: [testbed-node-3]
2026-03-24 03:55:42.017314 | orchestrator | ok: [testbed-node-4]
2026-03-24 03:55:42.017390 | orchestrator | ok: [testbed-node-5]
2026-03-24 03:55:42.017398 | orchestrator |
2026-03-24 03:55:42.017406 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-24 03:55:42.017415 | orchestrator | Tuesday 24 March 2026 03:55:36 +0000 (0:00:00.476) 0:00:19.013 *********
2026-03-24 03:55:42.017423 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-24 03:55:42.017432 | orchestrator |
2026-03-24 03:55:42.017440 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-24 03:55:42.017448 | orchestrator | Tuesday 24 March 2026 03:55:37 +0000 (0:00:00.255) 0:00:19.269 *********
2026-03-24 03:55:42.017456 | orchestrator | skipping: [testbed-node-3]
2026-03-24 03:55:42.017464 | orchestrator |
2026-03-24 03:55:42.017472 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-24 03:55:42.017480 | orchestrator | Tuesday 24 March 2026 03:55:37 +0000 (0:00:00.249) 0:00:19.518 *********
2026-03-24 03:55:42.017488 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-24 03:55:42.017497 | orchestrator |
2026-03-24 03:55:42.017505 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-24 03:55:42.017514 | orchestrator | Tuesday 24 March 2026 03:55:39 +0000 (0:00:01.593) 0:00:21.111 *********
2026-03-24 03:55:42.017522 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-24 03:55:42.017530 | orchestrator |
2026-03-24 03:55:42.017538 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-24 03:55:42.017546 | orchestrator | Tuesday 24 March 2026 03:55:39 +0000 (0:00:00.275) 0:00:21.387 *********
2026-03-24 03:55:42.017555 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-24 03:55:42.017562 | orchestrator |
2026-03-24 03:55:42.017590 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-24 03:55:42.017598 | orchestrator | Tuesday 24 March 2026 03:55:39 +0000 (0:00:00.239) 0:00:21.627 *********
2026-03-24 03:55:42.017603 | orchestrator |
2026-03-24 03:55:42.017608 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-24 03:55:42.017613 | orchestrator | Tuesday 24 March 2026 03:55:39 +0000 (0:00:00.067) 0:00:21.694 *********
2026-03-24 03:55:42.017618 | orchestrator |
2026-03-24 03:55:42.017624 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-24 03:55:42.017632 | orchestrator | Tuesday 24 March 2026 03:55:39 +0000 (0:00:00.067) 0:00:21.762 *********
2026-03-24 03:55:42.017640 | orchestrator |
2026-03-24 03:55:42.017648 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-24 03:55:42.017665 | orchestrator | Tuesday 24 March 2026 03:55:39 +0000 (0:00:00.070) 0:00:21.833 *********
2026-03-24 03:55:42.017674 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-24 03:55:42.017681 | orchestrator |
2026-03-24 03:55:42.017689 | orchestrator | TASK [Print report file information] *******************************************
2026-03-24 03:55:42.017697 | orchestrator | Tuesday 24 March 2026 03:55:41 +0000 (0:00:01.444) 0:00:23.277 ********* 2026-03-24 03:55:42.017711 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-24 03:55:42.017720 | orchestrator |  "msg": [ 2026-03-24 03:55:42.017725 | orchestrator |  "Validator run completed.", 2026-03-24 03:55:42.017730 | orchestrator |  "You can find the report file here:", 2026-03-24 03:55:42.017735 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-24T03:55:18+00:00-report.json", 2026-03-24 03:55:42.017742 | orchestrator |  "on the following host:", 2026-03-24 03:55:42.017746 | orchestrator |  "testbed-manager" 2026-03-24 03:55:42.017752 | orchestrator |  ] 2026-03-24 03:55:42.017756 | orchestrator | } 2026-03-24 03:55:42.017761 | orchestrator | 2026-03-24 03:55:42.017766 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 03:55:42.017772 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-24 03:55:42.017779 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-24 03:55:42.017784 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-24 03:55:42.017789 | orchestrator | 2026-03-24 03:55:42.017793 | orchestrator | 2026-03-24 03:55:42.017798 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:55:42.017803 | orchestrator | Tuesday 24 March 2026 03:55:41 +0000 (0:00:00.547) 0:00:23.825 ********* 2026-03-24 03:55:42.017808 | orchestrator | =============================================================================== 2026-03-24 03:55:42.017813 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.70s 2026-03-24 03:55:42.017817 | orchestrator | Get ceph osd tree 
------------------------------------------------------- 1.60s 2026-03-24 03:55:42.017822 | orchestrator | Aggregate test results step one ----------------------------------------- 1.59s 2026-03-24 03:55:42.017827 | orchestrator | Write report file ------------------------------------------------------- 1.44s 2026-03-24 03:55:42.017832 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.73s 2026-03-24 03:55:42.017836 | orchestrator | Get timestamp for report file ------------------------------------------- 0.69s 2026-03-24 03:55:42.017841 | orchestrator | Create report output directory ------------------------------------------ 0.61s 2026-03-24 03:55:42.017846 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.59s 2026-03-24 03:55:42.017851 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.55s 2026-03-24 03:55:42.017855 | orchestrator | Print report file information ------------------------------------------- 0.55s 2026-03-24 03:55:42.017860 | orchestrator | Aggregate test results step one ----------------------------------------- 0.49s 2026-03-24 03:55:42.017865 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2026-03-24 03:55:42.017870 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.49s 2026-03-24 03:55:42.017874 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.48s 2026-03-24 03:55:42.017879 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2026-03-24 03:55:42.017884 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.43s 2026-03-24 03:55:42.017889 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.39s 2026-03-24 03:55:42.017897 | orchestrator | Get extra vars for Ceph 
configuration ----------------------------------- 0.39s 2026-03-24 03:55:42.017902 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.36s 2026-03-24 03:55:42.017907 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s 2026-03-24 03:55:42.281442 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-24 03:55:42.289552 | orchestrator | + set -e 2026-03-24 03:55:42.289632 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 03:55:42.290514 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 03:55:42.290544 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 03:55:42.290651 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-24 03:55:42.290697 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 03:55:42.290772 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 03:55:42.290938 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 03:55:42.290952 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 03:55:42.290967 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 03:55:42.290980 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 03:55:42.291057 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 03:55:42.291068 | orchestrator | ++ export ARA=false 2026-03-24 03:55:42.291079 | orchestrator | ++ ARA=false 2026-03-24 03:55:42.291089 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 03:55:42.291099 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 03:55:42.291110 | orchestrator | ++ export TEMPEST=false 2026-03-24 03:55:42.291117 | orchestrator | ++ TEMPEST=false 2026-03-24 03:55:42.291123 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 03:55:42.291129 | orchestrator | ++ IS_ZUUL=true 2026-03-24 03:55:42.291138 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 03:55:42.291149 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 03:55:42.291158 | 
orchestrator | ++ export EXTERNAL_API=false 2026-03-24 03:55:42.291168 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 03:55:42.291177 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 03:55:42.291187 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 03:55:42.291198 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 03:55:42.291209 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 03:55:42.291228 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 03:55:42.291235 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 03:55:42.291241 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-24 03:55:42.291247 | orchestrator | + source /etc/os-release 2026-03-24 03:55:42.291254 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-24 03:55:42.291260 | orchestrator | ++ NAME=Ubuntu 2026-03-24 03:55:42.291266 | orchestrator | ++ VERSION_ID=24.04 2026-03-24 03:55:42.291273 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-24 03:55:42.291279 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-24 03:55:42.291285 | orchestrator | ++ ID=ubuntu 2026-03-24 03:55:42.291291 | orchestrator | ++ ID_LIKE=debian 2026-03-24 03:55:42.291297 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-24 03:55:42.291303 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-24 03:55:42.291310 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-24 03:55:42.291316 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-24 03:55:42.291359 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-24 03:55:42.291366 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-24 03:55:42.291372 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-24 03:55:42.291379 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-24 03:55:42.291386 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl 
libjson-perl monitoring-plugins-basic mysql-client 2026-03-24 03:55:42.320780 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-24 03:56:02.929409 | orchestrator | 2026-03-24 03:56:02.929496 | orchestrator | # Status of Elasticsearch 2026-03-24 03:56:02.929507 | orchestrator | 2026-03-24 03:56:02.929514 | orchestrator | + pushd /opt/configuration/contrib 2026-03-24 03:56:02.929521 | orchestrator | + echo 2026-03-24 03:56:02.929529 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-24 03:56:02.929535 | orchestrator | + echo 2026-03-24 03:56:02.929541 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-24 03:56:03.126364 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-24 03:56:03.126484 | orchestrator | 2026-03-24 03:56:03.126497 | orchestrator | # Status of MariaDB 2026-03-24 03:56:03.126505 | orchestrator | 2026-03-24 03:56:03.126513 | orchestrator | + echo 2026-03-24 03:56:03.126520 | orchestrator | + echo '# Status of MariaDB' 2026-03-24 03:56:03.126527 | orchestrator | + echo 2026-03-24 03:56:03.126964 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-24 03:56:03.173236 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-24 03:56:03.173384 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-24 03:56:03.173399 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-24 03:56:03.173408 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-24 03:56:03.237096 | orchestrator | Reading package lists... 
2026-03-24 03:56:03.545549 | orchestrator | Building dependency tree... 2026-03-24 03:56:03.546170 | orchestrator | Reading state information... 2026-03-24 03:56:03.894244 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-03-24 03:56:03.894391 | orchestrator | bc set to manually installed. 2026-03-24 03:56:03.894408 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-03-24 03:56:04.500107 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-03-24 03:56:04.500195 | orchestrator | 2026-03-24 03:56:04.500211 | orchestrator | # Status of Prometheus 2026-03-24 03:56:04.500222 | orchestrator | 2026-03-24 03:56:04.500232 | orchestrator | + echo 2026-03-24 03:56:04.500242 | orchestrator | + echo '# Status of Prometheus' 2026-03-24 03:56:04.500252 | orchestrator | + echo 2026-03-24 03:56:04.500262 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-24 03:56:04.551184 | orchestrator | Unauthorized 2026-03-24 03:56:04.551978 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-24 03:56:04.622712 | orchestrator | Unauthorized 2026-03-24 03:56:04.626764 | orchestrator | 2026-03-24 03:56:04.626851 | orchestrator | # Status of RabbitMQ 2026-03-24 03:56:04.626864 | orchestrator | 2026-03-24 03:56:04.626874 | orchestrator | + echo 2026-03-24 03:56:04.626884 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-24 03:56:04.626893 | orchestrator | + echo 2026-03-24 03:56:04.627282 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-24 03:56:04.677513 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-24 03:56:04.677589 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-24 03:56:04.677599 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-03-24 03:56:05.104086 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 
2026-03-24 03:56:05.113709 | orchestrator | 2026-03-24 03:56:05.113783 | orchestrator | # Status of Redis 2026-03-24 03:56:05.113793 | orchestrator | 2026-03-24 03:56:05.113799 | orchestrator | + echo 2026-03-24 03:56:05.113805 | orchestrator | + echo '# Status of Redis' 2026-03-24 03:56:05.113812 | orchestrator | + echo 2026-03-24 03:56:05.113820 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-24 03:56:05.117707 | orchestrator | TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001446s;;;0.000000;10.000000 2026-03-24 03:56:05.117772 | orchestrator | 2026-03-24 03:56:05.117782 | orchestrator | # Create backup of MariaDB database 2026-03-24 03:56:05.117789 | orchestrator | 2026-03-24 03:56:05.117795 | orchestrator | + popd 2026-03-24 03:56:05.117802 | orchestrator | + echo 2026-03-24 03:56:05.117808 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-24 03:56:05.117815 | orchestrator | + echo 2026-03-24 03:56:05.117822 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-24 03:56:07.027531 | orchestrator | 2026-03-24 03:56:07 | INFO  | Task a5fe967d-fd62-49c2-bd73-50eb3e2aa329 (mariadb_backup) was prepared for execution. 2026-03-24 03:56:07.027596 | orchestrator | 2026-03-24 03:56:07 | INFO  | It takes a moment until task a5fe967d-fd62-49c2-bd73-50eb3e2aa329 (mariadb_backup) has been started and output is visible here. 
2026-03-24 03:58:43.739128 | orchestrator | 2026-03-24 03:58:43.739238 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 03:58:43.739342 | orchestrator | 2026-03-24 03:58:43.739353 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 03:58:43.739382 | orchestrator | Tuesday 24 March 2026 03:56:10 +0000 (0:00:00.131) 0:00:00.131 ********* 2026-03-24 03:58:43.739391 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:58:43.739400 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:58:43.739408 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:58:43.739416 | orchestrator | 2026-03-24 03:58:43.739424 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 03:58:43.739432 | orchestrator | Tuesday 24 March 2026 03:56:11 +0000 (0:00:00.292) 0:00:00.423 ********* 2026-03-24 03:58:43.739440 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-24 03:58:43.739449 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-24 03:58:43.739457 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-24 03:58:43.739465 | orchestrator | 2026-03-24 03:58:43.739472 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-24 03:58:43.739480 | orchestrator | 2026-03-24 03:58:43.739488 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-24 03:58:43.739496 | orchestrator | Tuesday 24 March 2026 03:56:11 +0000 (0:00:00.419) 0:00:00.843 ********* 2026-03-24 03:58:43.739504 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 03:58:43.739512 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-24 03:58:43.739520 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-24 03:58:43.739528 | orchestrator | 
2026-03-24 03:58:43.739536 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-24 03:58:43.739558 | orchestrator | Tuesday 24 March 2026 03:56:11 +0000 (0:00:00.358) 0:00:01.201 ********* 2026-03-24 03:58:43.739567 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 03:58:43.739576 | orchestrator | 2026-03-24 03:58:43.739584 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-24 03:58:43.739592 | orchestrator | Tuesday 24 March 2026 03:56:12 +0000 (0:00:00.465) 0:00:01.666 ********* 2026-03-24 03:58:43.739599 | orchestrator | ok: [testbed-node-0] 2026-03-24 03:58:43.739607 | orchestrator | ok: [testbed-node-1] 2026-03-24 03:58:43.739615 | orchestrator | ok: [testbed-node-2] 2026-03-24 03:58:43.739623 | orchestrator | 2026-03-24 03:58:43.739631 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-24 03:58:43.739640 | orchestrator | Tuesday 24 March 2026 03:56:15 +0000 (0:00:02.864) 0:00:04.530 ********* 2026-03-24 03:58:43.739649 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:58:43.739659 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:58:43.739668 | orchestrator | 2026-03-24 03:58:43.739677 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-24 03:58:43.739686 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-24 03:58:43.739694 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-24 03:58:43.739703 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-24 03:58:43.739712 | orchestrator | mariadb_bootstrap_restart 2026-03-24 03:58:43.739721 | orchestrator | changed: [testbed-node-0] 2026-03-24 03:58:43.739730 | orchestrator | 
2026-03-24 03:58:43.739739 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-24 03:58:43.739748 | orchestrator | skipping: no hosts matched 2026-03-24 03:58:43.739757 | orchestrator | 2026-03-24 03:58:43.739766 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-24 03:58:43.739774 | orchestrator | skipping: no hosts matched 2026-03-24 03:58:43.739781 | orchestrator | 2026-03-24 03:58:43.739789 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-24 03:58:43.739797 | orchestrator | skipping: no hosts matched 2026-03-24 03:58:43.739804 | orchestrator | 2026-03-24 03:58:43.739812 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-24 03:58:43.739820 | orchestrator | 2026-03-24 03:58:43.739835 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-24 03:58:43.739847 | orchestrator | Tuesday 24 March 2026 03:58:42 +0000 (0:02:27.611) 0:02:32.142 ********* 2026-03-24 03:58:43.739859 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:58:43.739872 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:58:43.739884 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:58:43.739898 | orchestrator | 2026-03-24 03:58:43.739912 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-24 03:58:43.739925 | orchestrator | Tuesday 24 March 2026 03:58:43 +0000 (0:00:00.324) 0:02:32.466 ********* 2026-03-24 03:58:43.739935 | orchestrator | skipping: [testbed-node-0] 2026-03-24 03:58:43.739943 | orchestrator | skipping: [testbed-node-1] 2026-03-24 03:58:43.739950 | orchestrator | skipping: [testbed-node-2] 2026-03-24 03:58:43.739958 | orchestrator | 2026-03-24 03:58:43.739966 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-24 03:58:43.739975 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 03:58:43.739984 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-24 03:58:43.739993 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-24 03:58:43.740001 | orchestrator | 2026-03-24 03:58:43.740009 | orchestrator | 2026-03-24 03:58:43.740016 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 03:58:43.740025 | orchestrator | Tuesday 24 March 2026 03:58:43 +0000 (0:00:00.386) 0:02:32.852 ********* 2026-03-24 03:58:43.740033 | orchestrator | =============================================================================== 2026-03-24 03:58:43.740066 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 147.61s 2026-03-24 03:58:43.740081 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.86s 2026-03-24 03:58:43.740093 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.47s 2026-03-24 03:58:43.740106 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-03-24 03:58:43.740119 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.39s 2026-03-24 03:58:43.740133 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.36s 2026-03-24 03:58:43.740146 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-03-24 03:58:43.740161 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2026-03-24 03:58:44.016207 | orchestrator | + sh -c 
/opt/configuration/scripts/check/300-openstack.sh 2026-03-24 03:58:44.027128 | orchestrator | + set -e 2026-03-24 03:58:44.027201 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 03:58:44.027781 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 03:58:44.027799 | orchestrator | ++ INTERACTIVE=false 2026-03-24 03:58:44.027807 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 03:58:44.027816 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 03:58:44.028407 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-24 03:58:44.030131 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-24 03:58:44.036117 | orchestrator | 2026-03-24 03:58:44.036212 | orchestrator | # OpenStack endpoints 2026-03-24 03:58:44.036227 | orchestrator | 2026-03-24 03:58:44.036239 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 03:58:44.036271 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 03:58:44.036297 | orchestrator | + export OS_CLOUD=admin 2026-03-24 03:58:44.036322 | orchestrator | + OS_CLOUD=admin 2026-03-24 03:58:44.036334 | orchestrator | + echo 2026-03-24 03:58:44.036345 | orchestrator | + echo '# OpenStack endpoints' 2026-03-24 03:58:44.036356 | orchestrator | + echo 2026-03-24 03:58:44.036367 | orchestrator | + openstack endpoint list 2026-03-24 03:58:47.141682 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-24 03:58:47.141800 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-24 03:58:47.141813 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-24 03:58:47.141821 | orchestrator | | 
07be59629892422ba5248b5d3f5dc563 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-24 03:58:47.141829 | orchestrator | | 0d5f4fb45cc6428db8ff59170b173f73 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-24 03:58:47.141836 | orchestrator | | 188410c714e44c60a1413a6d0565db3e | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-24 03:58:47.141843 | orchestrator | | 1f3ad43f3cb2448da4f59a8dd5613282 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-24 03:58:47.141851 | orchestrator | | 29afedc380e347d7acfd1d1d06734ab5 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-24 03:58:47.141859 | orchestrator | | 2c2ff52dbf254217a2dbbce57baeffab | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-03-24 03:58:47.141867 | orchestrator | | 30577bbd896c416d85ac0c8955a74e6e | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-24 03:58:47.141874 | orchestrator | | 5713dcec64944f50a14b9c850e6f8ebc | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-24 03:58:47.141882 | orchestrator | | 5b58da6f18e34701859e8d6043da8293 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-24 03:58:47.141889 | orchestrator | | 664b61f8b9ec4256bfbf65dd2ac7a63d | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-24 03:58:47.141913 | orchestrator | | 6fe326f1f2ad4fb2bb8a0385b50312da | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-03-24 03:58:47.141921 | orchestrator | | 
7bb5ffc1974a4a17af4146f3b70e7fd4 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-03-24 03:58:47.141928 | orchestrator | | 7ce090b9907840e4b0195fdb739f8eec | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-24 03:58:47.141936 | orchestrator | | 8bf6222bf3b84ec2a80195737b3c9e4e | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-24 03:58:47.141944 | orchestrator | | 9b46c8a1499444d68b1c2d72cf89dcfe | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-24 03:58:47.141951 | orchestrator | | 9b5a383af1204f05b0986792a1a642ea | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-24 03:58:47.141959 | orchestrator | | 9e7aaef14b8946c18c6a61b530d57a28 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-24 03:58:47.141967 | orchestrator | | a8f650f6dc09410ca88949e8abf5f136 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-24 03:58:47.141982 | orchestrator | | ac4e28013150459c8b49cfdf849500f4 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-24 03:58:47.141989 | orchestrator | | b243ae56a6404172aebdfcf31f3c65d0 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-24 03:58:47.142055 | orchestrator | | b3c099c84f4842ada88d928464cc13b6 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-24 03:58:47.142066 | orchestrator | | bcf5e7bd6cfe4b38aaea73ede17cf5f9 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-24 03:58:47.142073 | orchestrator | | c08f12a0e57547d6a66aed32bf5f603a | RegionOne | designate | dns | True | 
public | https://api.testbed.osism.xyz:9001 |
2026-03-24 03:58:47.142080 | orchestrator | | c1c88d7502064183b9cd7820e733192c | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-03-24 03:58:47.142087 | orchestrator | | c5964e9b2f2e4e929c7df1dd3c1d3fb7 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-03-24 03:58:47.142095 | orchestrator | | c658c12bbc6e4254a6a98528cd80072e | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-03-24 03:58:47.142103 | orchestrator | | d12ec8d5f2df43509d8c38a0c1807f77 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-03-24 03:58:47.142110 | orchestrator | | e3d848559a0a44e3ae824ed55c980475 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-03-24 03:58:47.142118 | orchestrator | | e7db216ad83a4da88a686e7bb0012c9f | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-03-24 03:58:47.142125 | orchestrator | | f669d4922fa64aca8be051b9ee589efb | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-03-24 03:58:47.142134 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-03-24 03:58:47.348121 | orchestrator |
2026-03-24 03:58:47.348187 | orchestrator | # Cinder
2026-03-24 03:58:47.348193 | orchestrator |
2026-03-24 03:58:47.348197 | orchestrator | + echo
2026-03-24 03:58:47.348202 | orchestrator | + echo '# Cinder'
2026-03-24 03:58:47.348206 | orchestrator | + echo
2026-03-24 03:58:47.348210 | orchestrator | + openstack volume service list
2026-03-24 03:58:49.880683 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-24 03:58:49.880798 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-03-24 03:58:49.880813 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-24 03:58:49.880825 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-24T03:58:44.000000 |
2026-03-24 03:58:49.880836 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-24T03:58:44.000000 |
2026-03-24 03:58:49.880847 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-24T03:58:44.000000 |
2026-03-24 03:58:49.880858 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-24T03:58:44.000000 |
2026-03-24 03:58:49.880869 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-24T03:58:40.000000 |
2026-03-24 03:58:49.880880 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-24T03:58:40.000000 |
2026-03-24 03:58:49.880917 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-24T03:58:46.000000 |
2026-03-24 03:58:49.880929 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-24T03:58:48.000000 |
2026-03-24 03:58:49.880940 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-24T03:58:48.000000 |
2026-03-24 03:58:49.880951 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-03-24 03:58:50.096489 | orchestrator |
2026-03-24 03:58:50.096574 | orchestrator | # Neutron
2026-03-24 03:58:50.096583 | orchestrator |
2026-03-24 03:58:50.096590 | orchestrator | + echo
2026-03-24 03:58:50.096597 | orchestrator | + echo '# Neutron'
2026-03-24 03:58:50.096604 | orchestrator | + echo
2026-03-24 03:58:50.096610 | orchestrator | + openstack network agent list
2026-03-24 03:58:52.667821 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-24 03:58:52.667900 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-03-24 03:58:52.667906 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-24 03:58:52.667910 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-03-24 03:58:52.667928 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-03-24 03:58:52.667934 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-03-24 03:58:52.667939 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-03-24 03:58:52.667947 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-03-24 03:58:52.667957 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-03-24 03:58:52.667963 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-24 03:58:52.667969 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-24 03:58:52.667975 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-03-24 03:58:52.667981 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-03-24 03:58:52.896969 | orchestrator | + openstack network service provider list
2026-03-24 03:58:55.311756 | orchestrator | +---------------+------+---------+
2026-03-24 03:58:55.311857 | orchestrator | | Service Type | Name | Default |
2026-03-24 03:58:55.311864 | orchestrator | +---------------+------+---------+
2026-03-24 03:58:55.311870 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-03-24 03:58:55.311877 | orchestrator | +---------------+------+---------+
2026-03-24 03:58:55.567415 | orchestrator |
2026-03-24 03:58:55.567481 | orchestrator | # Nova
2026-03-24 03:58:55.567487 | orchestrator |
2026-03-24 03:58:55.567492 | orchestrator | + echo
2026-03-24 03:58:55.567497 | orchestrator | + echo '# Nova'
2026-03-24 03:58:55.567502 | orchestrator | + echo
2026-03-24 03:58:55.567507 | orchestrator | + openstack compute service list
2026-03-24 03:58:58.886306 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-24 03:58:58.886415 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-03-24 03:58:58.886424 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-24 03:58:58.886428 | orchestrator | | 25c7cd17-5ebc-4503-9ddc-e0fa87dc1454 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-24T03:58:54.000000 |
2026-03-24 03:58:58.886432 | orchestrator | | 97277df1-f0bb-496b-95cf-874378226a34 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-24T03:58:49.000000 |
2026-03-24 03:58:58.886436 | orchestrator | | 05eb847b-62bb-476a-b2f5-3f203350f67d | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-24T03:58:49.000000 |
2026-03-24 03:58:58.886474 | orchestrator | | d17d8941-6e76-4bc9-97fa-86cb2bed6919 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-24T03:58:50.000000 |
2026-03-24 03:58:58.886482 | orchestrator | | 86e2f4b2-1dc0-4ed1-a61d-12c1537bf559 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-24T03:58:52.000000 |
2026-03-24 03:58:58.886490 | orchestrator | | 2cc49342-de0d-4bf4-aada-a698cd7f3661 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-24T03:58:53.000000 |
2026-03-24 03:58:58.886499 | orchestrator | | 2a3d2cec-bef4-4976-aeda-61de3437b603 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-24T03:58:50.000000 |
2026-03-24 03:58:58.886507 | orchestrator | | 2c2a4e24-f936-44bd-a724-4ab3fd3df9c3 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-24T03:58:50.000000 |
2026-03-24 03:58:58.886513 | orchestrator | | f4efa84e-cbdf-4aaf-8005-7cac96b3c3b8 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-24T03:58:50.000000 |
2026-03-24 03:58:58.886519 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-03-24 03:58:59.119598 | orchestrator | + openstack hypervisor list
2026-03-24 03:59:01.634464 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-24 03:59:01.634565 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-03-24 03:59:01.634578 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-24 03:59:01.634589 | orchestrator | | e7ebbba2-d480-4a16-8375-e2240101949c | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-03-24 03:59:01.634598 | orchestrator | | e2910f4e-252d-4555-a35e-e127125079fa | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-03-24 03:59:01.634611 | orchestrator | | 43629206-a2cd-432b-8be7-d3043b50a16f | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-03-24 03:59:01.634620 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-03-24 03:59:01.851143 | orchestrator |
2026-03-24 03:59:01.851226 | orchestrator | # Run OpenStack test play
2026-03-24 03:59:01.851305 | orchestrator |
2026-03-24 03:59:01.851316 | orchestrator | + echo
2026-03-24 03:59:01.851339 | orchestrator | + echo '# Run OpenStack test play'
2026-03-24 03:59:01.851348 | orchestrator | + echo
2026-03-24 03:59:01.851355 | orchestrator | + osism apply --environment openstack test
2026-03-24 03:59:03.765330 | orchestrator | 2026-03-24 03:59:03 | INFO  | Trying to run play test in environment openstack
2026-03-24 03:59:03.841214 | orchestrator | 2026-03-24 03:59:03 | INFO  | Task a8a4acb8-3b49-447a-8465-8d6f94e5a2b6 (test) was prepared for execution.
2026-03-24 03:59:03.841315 | orchestrator | 2026-03-24 03:59:03 | INFO  | It takes a moment until task a8a4acb8-3b49-447a-8465-8d6f94e5a2b6 (test) has been started and output is visible here.
2026-03-24 04:01:32.707111 | orchestrator |
2026-03-24 04:01:32.707190 | orchestrator | PLAY [Create test project] *****************************************************
2026-03-24 04:01:32.707197 | orchestrator |
2026-03-24 04:01:32.707202 | orchestrator | TASK [Create test domain] ******************************************************
2026-03-24 04:01:32.707223 | orchestrator | Tuesday 24 March 2026 03:59:07 +0000 (0:00:00.077) 0:00:00.077 *********
2026-03-24 04:01:32.707227 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707232 | orchestrator |
2026-03-24 04:01:32.707236 | orchestrator | TASK [Create test-admin user] **************************************************
2026-03-24 04:01:32.707240 | orchestrator | Tuesday 24 March 2026 03:59:11 +0000 (0:00:03.521) 0:00:03.598 *********
2026-03-24 04:01:32.707244 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707247 | orchestrator |
2026-03-24 04:01:32.707251 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-03-24 04:01:32.707255 | orchestrator | Tuesday 24 March 2026 03:59:15 +0000 (0:00:04.072) 0:00:07.671 *********
2026-03-24 04:01:32.707259 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707263 | orchestrator |
2026-03-24 04:01:32.707266 | orchestrator | TASK [Create test project] *****************************************************
2026-03-24 04:01:32.707270 | orchestrator | Tuesday 24 March 2026 03:59:21 +0000 (0:00:06.327) 0:00:13.998 *********
2026-03-24 04:01:32.707274 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707278 | orchestrator |
2026-03-24 04:01:32.707281 | orchestrator | TASK [Create test user] ********************************************************
2026-03-24 04:01:32.707286 | orchestrator | Tuesday 24 March 2026 03:59:25 +0000 (0:00:03.802) 0:00:17.801 *********
2026-03-24 04:01:32.707289 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707293 | orchestrator |
2026-03-24 04:01:32.707297 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-03-24 04:01:32.707301 | orchestrator | Tuesday 24 March 2026 03:59:29 +0000 (0:00:04.129) 0:00:21.930 *********
2026-03-24 04:01:32.707304 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-03-24 04:01:32.707309 | orchestrator | changed: [localhost] => (item=member)
2026-03-24 04:01:32.707313 | orchestrator | changed: [localhost] => (item=creator)
2026-03-24 04:01:32.707317 | orchestrator |
2026-03-24 04:01:32.707321 | orchestrator | TASK [Create test server group] ************************************************
2026-03-24 04:01:32.707325 | orchestrator | Tuesday 24 March 2026 03:59:40 +0000 (0:00:10.884) 0:00:32.815 *********
2026-03-24 04:01:32.707329 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707332 | orchestrator |
2026-03-24 04:01:32.707336 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-03-24 04:01:32.707340 | orchestrator | Tuesday 24 March 2026 03:59:45 +0000 (0:00:04.918) 0:00:37.733 *********
2026-03-24 04:01:32.707344 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707348 | orchestrator |
2026-03-24 04:01:32.707351 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-03-24 04:01:32.707355 | orchestrator | Tuesday 24 March 2026 03:59:50 +0000 (0:00:04.470) 0:00:42.204 *********
2026-03-24 04:01:32.707359 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707363 | orchestrator |
2026-03-24 04:01:32.707366 | orchestrator | TASK [Create icmp security group] **********************************************
2026-03-24 04:01:32.707370 | orchestrator | Tuesday 24 March 2026 03:59:54 +0000 (0:00:04.000) 0:00:46.205 *********
2026-03-24 04:01:32.707374 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707378 | orchestrator |
2026-03-24 04:01:32.707382 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-03-24 04:01:32.707385 | orchestrator | Tuesday 24 March 2026 03:59:57 +0000 (0:00:03.867) 0:00:50.073 *********
2026-03-24 04:01:32.707389 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707393 | orchestrator |
2026-03-24 04:01:32.707397 | orchestrator | TASK [Create test keypair] *****************************************************
2026-03-24 04:01:32.707400 | orchestrator | Tuesday 24 March 2026 04:00:01 +0000 (0:00:03.841) 0:00:53.914 *********
2026-03-24 04:01:32.707404 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707408 | orchestrator |
2026-03-24 04:01:32.707412 | orchestrator | TASK [Create test network] *****************************************************
2026-03-24 04:01:32.707416 | orchestrator | Tuesday 24 March 2026 04:00:05 +0000 (0:00:03.625) 0:00:57.540 *********
2026-03-24 04:01:32.707420 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707427 | orchestrator |
2026-03-24 04:01:32.707431 | orchestrator | TASK [Create test subnet] ******************************************************
2026-03-24 04:01:32.707435 | orchestrator | Tuesday 24 March 2026 04:00:09 +0000 (0:00:04.448) 0:01:01.988 *********
2026-03-24 04:01:32.707439 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707443 | orchestrator |
2026-03-24 04:01:32.707446 | orchestrator | TASK [Create test router] ******************************************************
2026-03-24 04:01:32.707450 | orchestrator | Tuesday 24 March 2026 04:00:14 +0000 (0:00:05.145) 0:01:07.134 *********
2026-03-24 04:01:32.707493 | orchestrator | changed: [localhost]
2026-03-24 04:01:32.707502 | orchestrator |
2026-03-24 04:01:32.707506 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-03-24 04:01:32.707510 | orchestrator |
2026-03-24 04:01:32.707514 | orchestrator | TASK [Get test server group] ***************************************************
2026-03-24 04:01:32.707517 | orchestrator | Tuesday 24 March 2026 04:00:25 +0000 (0:00:10.944) 0:01:18.079 *********
2026-03-24 04:01:32.707522 | orchestrator | ok: [localhost]
2026-03-24 04:01:32.707526 | orchestrator |
2026-03-24 04:01:32.707529 | orchestrator | TASK [Detach test volume] ******************************************************
2026-03-24 04:01:32.707533 | orchestrator | Tuesday 24 March 2026 04:00:29 +0000 (0:00:03.462) 0:01:21.541 *********
2026-03-24 04:01:32.707537 | orchestrator | skipping: [localhost]
2026-03-24 04:01:32.707541 | orchestrator |
2026-03-24 04:01:32.707555 | orchestrator | TASK [Delete test volume] ******************************************************
2026-03-24 04:01:32.707559 | orchestrator | Tuesday 24 March 2026 04:00:29 +0000 (0:00:00.035) 0:01:21.577 *********
2026-03-24 04:01:32.707563 | orchestrator | skipping: [localhost]
2026-03-24 04:01:32.707567 | orchestrator |
2026-03-24 04:01:32.707570 | orchestrator | TASK [Delete test instances] ***************************************************
2026-03-24 04:01:32.707574 | orchestrator | Tuesday 24 March 2026 04:00:29 +0000 (0:00:00.033) 0:01:21.611 *********
2026-03-24 04:01:32.707583 | orchestrator | skipping: [localhost] => (item=test-4)
2026-03-24 04:01:32.707587 | orchestrator | skipping: [localhost] => (item=test-3)
2026-03-24 04:01:32.707600 | orchestrator | skipping: [localhost] => (item=test-2)
2026-03-24 04:01:32.707604 | orchestrator | skipping: [localhost] => (item=test-1)
2026-03-24 04:01:32.707608 | orchestrator | skipping: [localhost] => (item=test)
2026-03-24 04:01:32.707611 | orchestrator | skipping: [localhost]
2026-03-24 04:01:32.707615 | orchestrator |
2026-03-24 04:01:32.707619 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-03-24 04:01:32.707623 | orchestrator | Tuesday 24 March 2026 04:00:29 +0000 (0:00:00.155) 0:01:21.766 *********
2026-03-24 04:01:32.707627 | orchestrator | skipping: [localhost]
2026-03-24 04:01:32.707630 | orchestrator |
2026-03-24 04:01:32.707634 | orchestrator | TASK [Create test instances] ***************************************************
2026-03-24 04:01:32.707638 | orchestrator | Tuesday 24 March 2026 04:00:29 +0000 (0:00:00.148) 0:01:21.915 *********
2026-03-24 04:01:32.707643 | orchestrator | changed: [localhost] => (item=test)
2026-03-24 04:01:32.707648 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-24 04:01:32.707652 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-24 04:01:32.707657 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-24 04:01:32.707661 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-24 04:01:32.707665 | orchestrator |
2026-03-24 04:01:32.707670 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-03-24 04:01:32.707674 | orchestrator | Tuesday 24 March 2026 04:00:33 +0000 (0:00:04.250) 0:01:26.165 *********
2026-03-24 04:01:32.707679 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-03-24 04:01:32.707684 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-03-24 04:01:32.707689 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-03-24 04:01:32.707693 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-03-24 04:01:32.707703 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j294466781863.3656', 'results_file': '/ansible/.ansible_async/j294466781863.3656', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-24 04:01:32.707709 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j179028688198.3681', 'results_file': '/ansible/.ansible_async/j179028688198.3681', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-24 04:01:32.707714 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j348720290850.3706', 'results_file': '/ansible/.ansible_async/j348720290850.3706', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-24 04:01:32.707719 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j785472603822.3731', 'results_file': '/ansible/.ansible_async/j785472603822.3731', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-24 04:01:32.707724 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j29401026686.3756', 'results_file': '/ansible/.ansible_async/j29401026686.3756', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-24 04:01:32.707728 | orchestrator |
2026-03-24 04:01:32.707733 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-03-24 04:01:32.707737 | orchestrator | Tuesday 24 March 2026 04:01:19 +0000 (0:00:45.833) 0:02:11.999 *********
2026-03-24 04:01:32.707742 | orchestrator | changed: [localhost] => (item=test)
2026-03-24 04:01:32.707747 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-24 04:01:32.707751 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-24 04:01:32.707756 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-24 04:01:32.707760 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-24 04:01:32.707764 | orchestrator |
2026-03-24 04:01:32.707769 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-03-24 04:01:32.707773 | orchestrator | Tuesday 24 March 2026 04:01:23 +0000 (0:00:03.825) 0:02:15.824 *********
2026-03-24 04:01:32.707778 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-03-24 04:01:32.707783 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j106308434418.3852', 'results_file': '/ansible/.ansible_async/j106308434418.3852', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-24 04:01:32.707788 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j24469863235.3877', 'results_file': '/ansible/.ansible_async/j24469863235.3877', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-24 04:01:32.707793 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j421462760789.3902', 'results_file': '/ansible/.ansible_async/j421462760789.3902', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-24 04:01:32.707800 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j935931630656.3927', 'results_file': '/ansible/.ansible_async/j935931630656.3927', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-24 04:02:11.483983 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j566747718370.3952', 'results_file': '/ansible/.ansible_async/j566747718370.3952', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-24 04:02:11.484090 | orchestrator |
2026-03-24 04:02:11.484112 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-03-24 04:02:11.484122 | orchestrator | Tuesday 24 March 2026 04:01:32 +0000 (0:00:09.043) 0:02:24.867 *********
2026-03-24 04:02:11.484129 | orchestrator | changed: [localhost] => (item=test)
2026-03-24 04:02:11.484158 | orchestrator | changed: [localhost] => (item=test-1)
2026-03-24 04:02:11.484166 | orchestrator | changed: [localhost] => (item=test-2)
2026-03-24 04:02:11.484173 | orchestrator | changed: [localhost] => (item=test-3)
2026-03-24 04:02:11.484179 | orchestrator | changed: [localhost] => (item=test-4)
2026-03-24 04:02:11.484187 | orchestrator |
2026-03-24 04:02:11.484194 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-03-24 04:02:11.484201 | orchestrator | Tuesday 24 March 2026 04:01:37 +0000 (0:00:04.652) 0:02:29.519 *********
2026-03-24 04:02:11.484208 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-03-24 04:02:11.484216 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j546816014943.4027', 'results_file': '/ansible/.ansible_async/j546816014943.4027', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-03-24 04:02:11.484223 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j242573501984.4052', 'results_file': '/ansible/.ansible_async/j242573501984.4052', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-03-24 04:02:11.484230 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j243743140172.4078', 'results_file': '/ansible/.ansible_async/j243743140172.4078', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-03-24 04:02:11.484251 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j874251370190.4104', 'results_file': '/ansible/.ansible_async/j874251370190.4104', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-03-24 04:02:11.484259 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j635020031079.4130', 'results_file': '/ansible/.ansible_async/j635020031079.4130', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-03-24 04:02:11.484265 | orchestrator |
2026-03-24 04:02:11.484272 | orchestrator | TASK [Create test volume] ******************************************************
2026-03-24 04:02:11.484279 | orchestrator | Tuesday 24 March 2026 04:01:46 +0000 (0:00:09.276) 0:02:38.796 *********
2026-03-24 04:02:11.484286 | orchestrator | changed: [localhost]
2026-03-24 04:02:11.484293 | orchestrator |
2026-03-24 04:02:11.484300 | orchestrator | TASK [Attach test volume] ******************************************************
2026-03-24 04:02:11.484307 | orchestrator | Tuesday 24 March 2026 04:01:52 +0000 (0:00:06.351) 0:02:45.148 *********
2026-03-24 04:02:11.484313 | orchestrator | changed: [localhost]
2026-03-24 04:02:11.484320 | orchestrator |
2026-03-24 04:02:11.484327 | orchestrator | TASK [Create floating ip address] **********************************************
2026-03-24 04:02:11.484333 | orchestrator | Tuesday 24 March 2026 04:02:06 +0000 (0:00:13.278) 0:02:58.426 *********
2026-03-24 04:02:11.484340 | orchestrator | ok: [localhost]
2026-03-24 04:02:11.484347 | orchestrator |
2026-03-24 04:02:11.484354 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-03-24 04:02:11.484361 | orchestrator | Tuesday 24 March 2026 04:02:11 +0000 (0:00:04.946) 0:03:03.373 *********
2026-03-24 04:02:11.484368 | orchestrator | ok: [localhost] => {
2026-03-24 04:02:11.484375 | orchestrator |     "msg": "192.168.112.109"
2026-03-24 04:02:11.484382 | orchestrator | }
2026-03-24 04:02:11.484389 | orchestrator |
2026-03-24 04:02:11.484396 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 04:02:11.484403 | orchestrator | localhost : ok=26 changed=23 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
2026-03-24 04:02:11.484411 | orchestrator |
2026-03-24 04:02:11.484418 | orchestrator |
2026-03-24 04:02:11.484425 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 04:02:11.484431 | orchestrator | Tuesday 24 March 2026 04:02:11 +0000 (0:00:00.046) 0:03:03.419 *********
2026-03-24 04:02:11.484438 | orchestrator | ===============================================================================
2026-03-24 04:02:11.484455 | orchestrator | Wait for instance creation to complete --------------------------------- 45.83s
2026-03-24 04:02:11.484462 | orchestrator | Attach test volume ----------------------------------------------------- 13.28s
2026-03-24 04:02:11.484473 | orchestrator | Create test router ----------------------------------------------------- 10.94s
2026-03-24 04:02:11.484484 | orchestrator | Add member roles to user test ------------------------------------------ 10.88s
2026-03-24 04:02:11.484496 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.28s
2026-03-24 04:02:11.484514 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.04s
2026-03-24 04:02:11.484526 | orchestrator | Create test volume ------------------------------------------------------ 6.35s
2026-03-24 04:02:11.484555 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.33s
2026-03-24 04:02:11.484566 | orchestrator | Create test subnet ------------------------------------------------------ 5.15s
2026-03-24 04:02:11.484576 | orchestrator | Create floating ip address ---------------------------------------------- 4.95s
2026-03-24 04:02:11.484587 | orchestrator | Create test server group ------------------------------------------------ 4.92s
2026-03-24 04:02:11.484598 | orchestrator | Add tag to instances ---------------------------------------------------- 4.65s
2026-03-24 04:02:11.484609 | orchestrator | Create ssh security group ----------------------------------------------- 4.47s
2026-03-24 04:02:11.484642 | orchestrator | Create test network ----------------------------------------------------- 4.45s
2026-03-24 04:02:11.484654 | orchestrator | Create test instances --------------------------------------------------- 4.25s
2026-03-24 04:02:11.484665 | orchestrator | Create test user -------------------------------------------------------- 4.13s
2026-03-24 04:02:11.484675 | orchestrator | Create test-admin user -------------------------------------------------- 4.07s
2026-03-24 04:02:11.484686 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.00s
2026-03-24 04:02:11.484697 | orchestrator | Create icmp security group ---------------------------------------------- 3.87s
2026-03-24 04:02:11.484708 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.84s
2026-03-24 04:02:11.745140 | orchestrator | + server_list
2026-03-24 04:02:11.745228 | orchestrator | + openstack --os-cloud test server list
2026-03-24 04:02:15.438906 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-24 04:02:15.439002 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-03-24 04:02:15.439013 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-24 04:02:15.439019 | orchestrator | | 8419c3b7-6af8-483b-8a22-5f7a9dfa8f50 | test-3 | ACTIVE | test=192.168.112.135, 192.168.200.136 | N/A (booted from volume) | SCS-1L-1 |
2026-03-24 04:02:15.439025 | orchestrator | | d173d3e9-3632-4cc5-b9ae-30e25e091850 | test-4 | ACTIVE | test=192.168.112.107, 192.168.200.252 | N/A (booted from volume) | SCS-1L-1 |
2026-03-24 04:02:15.439032 | orchestrator | | d63365db-e450-41cb-a3ec-c4e527e62fec | test-2 | ACTIVE | test=192.168.112.128, 192.168.200.228 | N/A (booted from volume) | SCS-1L-1 |
2026-03-24 04:02:15.439038 | orchestrator | | 2c79ea91-3651-41ba-9735-86aa22bb0678 | test-1 | ACTIVE | test=192.168.112.114, 192.168.200.93 | N/A (booted from volume) | SCS-1L-1 |
2026-03-24 04:02:15.439044 | orchestrator | | 18e0e59f-0cd5-4dff-8766-43da8b12a142 | test | ACTIVE | test=192.168.112.109, 192.168.200.170 | N/A (booted from volume) | SCS-1L-1 |
2026-03-24 04:02:15.439050 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-03-24 04:02:15.679060 | orchestrator | + openstack --os-cloud test server show test
2026-03-24 04:02:18.667873 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------+
2026-03-24 04:02:18.667973 | orchestrator | | Field | Value |
2026-03-24 04:02:18.667985 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------+
2026-03-24 04:02:18.667990 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-03-24 04:02:18.667995 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-03-24 04:02:18.668001 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-03-24 04:02:18.668005 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-03-24 04:02:18.668010 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-03-24 04:02:18.668015 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-03-24 04:02:18.668030 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-03-24 04:02:18.668039 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-03-24 04:02:18.668043 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-03-24 04:02:18.668050 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-03-24 04:02:18.668054 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-03-24 04:02:18.668058 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-03-24 04:02:18.668062 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-03-24 04:02:18.668066 | orchestrator | | OS-EXT-STS:task_state | None |
2026-03-24 04:02:18.668070 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-03-24 04:02:18.668074 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-24T04:01:04.000000 |
2026-03-24 04:02:18.668108 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-03-24 04:02:18.668113 | orchestrator | | accessIPv4 | |
2026-03-24 04:02:18.668117 | orchestrator | | accessIPv6 | |
2026-03-24 04:02:18.668123 | orchestrator | | addresses | test=192.168.112.109, 192.168.200.170 |
2026-03-24 04:02:18.668127 | orchestrator | | config_drive | |
2026-03-24 04:02:18.668131 | orchestrator | | created | 2026-03-24T04:00:38Z |
2026-03-24 04:02:18.668135 | orchestrator | | description | None |
2026-03-24 04:02:18.668139 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-03-24 04:02:18.668143 | orchestrator | | hostId | cfda4ab99df722295bffb36b083d98a1ebbb7f4eb38686046c2dbe51 |
2026-03-24 04:02:18.668147 | orchestrator | | host_status | None |
2026-03-24 04:02:18.668158 | orchestrator | | id | 18e0e59f-0cd5-4dff-8766-43da8b12a142 |
2026-03-24 04:02:18.668162 | orchestrator | | image | N/A (booted from volume) |
2026-03-24 04:02:18.668166 | orchestrator | | key_name | test |
2026-03-24 04:02:18.668173 | orchestrator | | locked | False |
2026-03-24 04:02:18.668177 | orchestrator | | locked_reason | None |
2026-03-24 04:02:18.668180 | orchestrator | | name | test |
2026-03-24 04:02:18.668184 | orchestrator | | pinned_availability_zone | None |
2026-03-24 04:02:18.668188 | orchestrator | | progress | 0 |
2026-03-24 04:02:18.668192 | orchestrator | | project_id | 9f4d54e490b546f994a70fd86bc9f0c9 |
2026-03-24 04:02:18.668216 | orchestrator | | properties | hostname='test' |
2026-03-24 04:02:18.668233 | orchestrator | | security_groups | name='icmp' |
2026-03-24 04:02:18.668237 | orchestrator | | | name='ssh' |
2026-03-24 04:02:18.668247 | orchestrator | | server_groups | None |
2026-03-24 04:02:18.668251 | orchestrator | | status | ACTIVE |
2026-03-24 04:02:18.668256 | orchestrator | | tags | test |
2026-03-24 04:02:18.668266 | orchestrator | | trusted_image_certificates | None |
2026-03-24 04:02:18.668273 | orchestrator | | updated | 2026-03-24T04:01:25Z |
2026-03-24 04:02:18.668280 | orchestrator | | user_id | 8869fc7f800b4e67b37e8e78b705b2da |
2026-03-24 04:02:18.668290 | orchestrator | | volumes_attached | delete_on_termination='True', id='b2d0a39a-01dd-49bb-9516-2d8c22cb0ffe' |
2026-03-24 04:02:18.668297 | orchestrator | | | delete_on_termination='False', id='62b5de21-29c6-4bf9-91ca-618a03428414' |
2026-03-24 04:02:18.672186 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------+
2026-03-24 04:02:18.894533 | orchestrator | + openstack --os-cloud test server show test-1
2026-03-24 04:02:21.650557 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------+
2026-03-24
04:02:21.650713 | orchestrator | | Field | Value | 2026-03-24 04:02:21.650752 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:21.650766 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-24 04:02:21.650776 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-24 04:02:21.650787 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-24 04:02:21.650818 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-24 04:02:21.650829 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-24 04:02:21.650839 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-24 04:02:21.650869 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-24 04:02:21.650881 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-24 04:02:21.650890 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-24 04:02:21.650904 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-24 04:02:21.650914 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-24 04:02:21.650924 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-24 04:02:21.650940 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-24 04:02:21.650949 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-24 04:02:21.650957 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-24 04:02:21.650967 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-24T04:01:04.000000 | 2026-03-24 04:02:21.650984 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-24 04:02:21.650995 | orchestrator | | accessIPv4 | | 2026-03-24 
04:02:21.651009 | orchestrator | | accessIPv6 | | 2026-03-24 04:02:21.651018 | orchestrator | | addresses | test=192.168.112.114, 192.168.200.93 | 2026-03-24 04:02:21.651027 | orchestrator | | config_drive | | 2026-03-24 04:02:21.651038 | orchestrator | | created | 2026-03-24T04:00:39Z | 2026-03-24 04:02:21.651054 | orchestrator | | description | None | 2026-03-24 04:02:21.651064 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-24 04:02:21.651073 | orchestrator | | hostId | cfda4ab99df722295bffb36b083d98a1ebbb7f4eb38686046c2dbe51 | 2026-03-24 04:02:21.651084 | orchestrator | | host_status | None | 2026-03-24 04:02:21.651100 | orchestrator | | id | 2c79ea91-3651-41ba-9735-86aa22bb0678 | 2026-03-24 04:02:21.651111 | orchestrator | | image | N/A (booted from volume) | 2026-03-24 04:02:21.651124 | orchestrator | | key_name | test | 2026-03-24 04:02:21.651133 | orchestrator | | locked | False | 2026-03-24 04:02:21.651142 | orchestrator | | locked_reason | None | 2026-03-24 04:02:21.651158 | orchestrator | | name | test-1 | 2026-03-24 04:02:21.651168 | orchestrator | | pinned_availability_zone | None | 2026-03-24 04:02:21.651177 | orchestrator | | progress | 0 | 2026-03-24 04:02:21.651188 | orchestrator | | project_id | 9f4d54e490b546f994a70fd86bc9f0c9 | 2026-03-24 04:02:21.651198 | orchestrator | | properties | hostname='test-1' | 2026-03-24 04:02:21.651213 | orchestrator | | security_groups | name='icmp' | 2026-03-24 04:02:21.651223 | orchestrator | | | name='ssh' | 2026-03-24 04:02:21.651243 | orchestrator | | server_groups | None | 2026-03-24 04:02:21.651253 | orchestrator | | status | ACTIVE | 2026-03-24 
04:02:21.651269 | orchestrator | | tags | test | 2026-03-24 04:02:21.651279 | orchestrator | | trusted_image_certificates | None | 2026-03-24 04:02:21.651289 | orchestrator | | updated | 2026-03-24T04:01:25Z | 2026-03-24 04:02:21.651299 | orchestrator | | user_id | 8869fc7f800b4e67b37e8e78b705b2da | 2026-03-24 04:02:21.651309 | orchestrator | | volumes_attached | delete_on_termination='True', id='536965c2-f666-4ed7-8db2-4bfb9bbd72af' | 2026-03-24 04:02:21.655442 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:21.874137 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-24 04:02:24.849157 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:24.849229 | orchestrator | | Field | Value | 2026-03-24 04:02:24.849238 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:24.849257 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-24 04:02:24.849263 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-24 04:02:24.849269 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-24 04:02:24.849274 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-24 04:02:24.849279 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-24 04:02:24.849290 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-24 04:02:24.849307 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-24 04:02:24.849313 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-24 04:02:24.849318 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-24 04:02:24.849330 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-24 04:02:24.849335 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-24 04:02:24.849341 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-24 04:02:24.849346 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-24 04:02:24.849351 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-24 04:02:24.849357 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-24 04:02:24.849362 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-24T04:01:05.000000 | 2026-03-24 04:02:24.849372 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-24 04:02:24.849377 | orchestrator | | accessIPv4 | | 2026-03-24 04:02:24.849383 | orchestrator | | accessIPv6 | | 2026-03-24 04:02:24.849394 | orchestrator | | addresses | test=192.168.112.128, 192.168.200.228 | 2026-03-24 04:02:24.849400 | orchestrator | | config_drive | | 2026-03-24 04:02:24.849405 | orchestrator | | created | 2026-03-24T04:00:40Z | 2026-03-24 04:02:24.849411 | orchestrator | | description | None | 2026-03-24 04:02:24.849416 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-24 04:02:24.849421 | orchestrator | | hostId | 49c81f20779c44d6b46f3548b5887ddec9710ab149b3e390af532e6f | 2026-03-24 04:02:24.849427 | orchestrator | | host_status | None | 2026-03-24 04:02:24.849436 | orchestrator | | id | d63365db-e450-41cb-a3ec-c4e527e62fec | 2026-03-24 04:02:24.849441 | orchestrator | | image | N/A (booted from volume) | 2026-03-24 04:02:24.849450 | orchestrator | | key_name | test | 2026-03-24 04:02:24.849457 | orchestrator | | locked | False | 2026-03-24 04:02:24.849463 | orchestrator | | locked_reason | None | 2026-03-24 04:02:24.849468 | orchestrator | | name | test-2 | 2026-03-24 04:02:24.849473 | orchestrator | | pinned_availability_zone | None | 2026-03-24 04:02:24.849479 | orchestrator | | progress | 0 | 2026-03-24 04:02:24.849484 | orchestrator | | project_id | 9f4d54e490b546f994a70fd86bc9f0c9 | 2026-03-24 04:02:24.849489 | orchestrator | | properties | hostname='test-2' | 2026-03-24 04:02:24.849498 | orchestrator | | security_groups | name='icmp' | 2026-03-24 04:02:24.849507 | orchestrator | | | name='ssh' | 2026-03-24 04:02:24.849513 | orchestrator | | server_groups | None | 2026-03-24 04:02:24.849520 | orchestrator | | status | ACTIVE | 2026-03-24 04:02:24.849525 | orchestrator | | tags | test | 2026-03-24 04:02:24.849531 | orchestrator | | trusted_image_certificates | None | 2026-03-24 04:02:24.849536 | orchestrator | | updated | 2026-03-24T04:01:26Z | 2026-03-24 04:02:24.849541 | orchestrator | | user_id | 8869fc7f800b4e67b37e8e78b705b2da | 2026-03-24 04:02:24.849547 | orchestrator | | volumes_attached | delete_on_termination='True', id='b057784c-9007-40e7-a6f2-b035d235906d' | 2026-03-24 04:02:24.852640 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:25.074947 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-24 04:02:27.956798 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:27.956935 | orchestrator | | Field | Value | 2026-03-24 04:02:27.956969 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:27.956981 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-24 04:02:27.956991 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-24 04:02:27.957002 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-24 04:02:27.957012 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-24 04:02:27.957022 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-24 04:02:27.957032 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-24 
04:02:27.957080 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-24 04:02:27.957091 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-24 04:02:27.957102 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-24 04:02:27.957116 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-24 04:02:27.957127 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-24 04:02:27.957137 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-24 04:02:27.957147 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-24 04:02:27.957157 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-24 04:02:27.957167 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-24 04:02:27.957176 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-24T04:01:05.000000 | 2026-03-24 04:02:27.957199 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-24 04:02:27.957210 | orchestrator | | accessIPv4 | | 2026-03-24 04:02:27.957220 | orchestrator | | accessIPv6 | | 2026-03-24 04:02:27.957232 | orchestrator | | addresses | test=192.168.112.135, 192.168.200.136 | 2026-03-24 04:02:27.957244 | orchestrator | | config_drive | | 2026-03-24 04:02:27.957257 | orchestrator | | created | 2026-03-24T04:00:40Z | 2026-03-24 04:02:27.957269 | orchestrator | | description | None | 2026-03-24 04:02:27.957280 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-24 04:02:27.957292 | orchestrator | | hostId | 49c81f20779c44d6b46f3548b5887ddec9710ab149b3e390af532e6f | 2026-03-24 04:02:27.957317 | orchestrator | | host_status | None | 2026-03-24 04:02:27.957348 | orchestrator | | id | 
8419c3b7-6af8-483b-8a22-5f7a9dfa8f50 | 2026-03-24 04:02:27.957771 | orchestrator | | image | N/A (booted from volume) | 2026-03-24 04:02:27.957790 | orchestrator | | key_name | test | 2026-03-24 04:02:27.957798 | orchestrator | | locked | False | 2026-03-24 04:02:27.957806 | orchestrator | | locked_reason | None | 2026-03-24 04:02:27.957815 | orchestrator | | name | test-3 | 2026-03-24 04:02:27.957823 | orchestrator | | pinned_availability_zone | None | 2026-03-24 04:02:27.957831 | orchestrator | | progress | 0 | 2026-03-24 04:02:27.957848 | orchestrator | | project_id | 9f4d54e490b546f994a70fd86bc9f0c9 | 2026-03-24 04:02:27.957857 | orchestrator | | properties | hostname='test-3' | 2026-03-24 04:02:27.957877 | orchestrator | | security_groups | name='icmp' | 2026-03-24 04:02:27.957887 | orchestrator | | | name='ssh' | 2026-03-24 04:02:27.957900 | orchestrator | | server_groups | None | 2026-03-24 04:02:27.957920 | orchestrator | | status | ACTIVE | 2026-03-24 04:02:27.957935 | orchestrator | | tags | test | 2026-03-24 04:02:27.957947 | orchestrator | | trusted_image_certificates | None | 2026-03-24 04:02:27.957960 | orchestrator | | updated | 2026-03-24T04:01:26Z | 2026-03-24 04:02:27.957980 | orchestrator | | user_id | 8869fc7f800b4e67b37e8e78b705b2da | 2026-03-24 04:02:27.957993 | orchestrator | | volumes_attached | delete_on_termination='True', id='fb847ab3-9361-468b-b279-a89a4d71cd8b' | 2026-03-24 04:02:27.960960 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:28.197711 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-24 04:02:31.040271 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:31.040379 | orchestrator | | Field | Value | 2026-03-24 04:02:31.040397 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:31.040409 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-24 04:02:31.040420 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-24 04:02:31.040431 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-24 04:02:31.040441 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-24 04:02:31.040473 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-24 04:02:31.040486 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-24 04:02:31.040515 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-24 04:02:31.040535 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-24 04:02:31.040546 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-24 04:02:31.040557 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-24 04:02:31.040567 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-24 04:02:31.040578 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-24 04:02:31.040588 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-24 04:02:31.040607 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-24 04:02:31.040619 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-24 04:02:31.040630 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-24T04:01:08.000000 | 2026-03-24 04:02:31.040647 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-24 04:02:31.040663 | orchestrator | | accessIPv4 | | 2026-03-24 04:02:31.040674 | orchestrator | | accessIPv6 | | 2026-03-24 04:02:31.040684 | orchestrator | | addresses | test=192.168.112.107, 192.168.200.252 | 2026-03-24 04:02:31.040742 | orchestrator | | config_drive | | 2026-03-24 04:02:31.040755 | orchestrator | | created | 2026-03-24T04:00:40Z | 2026-03-24 04:02:31.040773 | orchestrator | | description | None | 2026-03-24 04:02:31.040785 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-24 04:02:31.040796 | orchestrator | | hostId | 49c81f20779c44d6b46f3548b5887ddec9710ab149b3e390af532e6f | 2026-03-24 04:02:31.040807 | orchestrator | | host_status | None | 2026-03-24 04:02:31.040825 | orchestrator | | id | d173d3e9-3632-4cc5-b9ae-30e25e091850 | 2026-03-24 04:02:31.040841 | orchestrator | | image | N/A (booted from volume) | 2026-03-24 04:02:31.040852 | orchestrator | | key_name | test | 2026-03-24 04:02:31.040863 | orchestrator | | locked | False | 2026-03-24 04:02:31.040875 | orchestrator | | locked_reason | None | 2026-03-24 04:02:31.040892 | orchestrator | | name | test-4 | 2026-03-24 04:02:31.040902 | orchestrator | | pinned_availability_zone | None | 2026-03-24 04:02:31.040914 | orchestrator | | progress | 0 | 2026-03-24 
04:02:31.040926 | orchestrator | | project_id | 9f4d54e490b546f994a70fd86bc9f0c9 | 2026-03-24 04:02:31.040937 | orchestrator | | properties | hostname='test-4' | 2026-03-24 04:02:31.040956 | orchestrator | | security_groups | name='icmp' | 2026-03-24 04:02:31.040973 | orchestrator | | | name='ssh' | 2026-03-24 04:02:31.040985 | orchestrator | | server_groups | None | 2026-03-24 04:02:31.040996 | orchestrator | | status | ACTIVE | 2026-03-24 04:02:31.041007 | orchestrator | | tags | test | 2026-03-24 04:02:31.041025 | orchestrator | | trusted_image_certificates | None | 2026-03-24 04:02:31.041036 | orchestrator | | updated | 2026-03-24T04:01:27Z | 2026-03-24 04:02:31.041047 | orchestrator | | user_id | 8869fc7f800b4e67b37e8e78b705b2da | 2026-03-24 04:02:31.041057 | orchestrator | | volumes_attached | delete_on_termination='True', id='379abc86-3818-496c-97a0-e2afee488959' | 2026-03-24 04:02:31.044190 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-24 04:02:31.292990 | orchestrator | + server_ping 2026-03-24 04:02:31.294612 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-03-24 04:02:31.294748 | orchestrator | ++ tr -d '\r' 2026-03-24 04:02:34.075904 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-24 04:02:34.075985 | orchestrator | + ping -c3 192.168.112.114 2026-03-24 04:02:34.088152 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data. 
2026-03-24 04:02:34.088246 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=4.57 ms 2026-03-24 04:02:35.087131 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.26 ms 2026-03-24 04:02:36.088343 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=1.59 ms 2026-03-24 04:02:36.088435 | orchestrator | 2026-03-24 04:02:36.088452 | orchestrator | --- 192.168.112.114 ping statistics --- 2026-03-24 04:02:36.088463 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-24 04:02:36.088474 | orchestrator | rtt min/avg/max/mdev = 1.587/2.803/4.568/1.277 ms 2026-03-24 04:02:36.088485 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-24 04:02:36.088496 | orchestrator | + ping -c3 192.168.112.109 2026-03-24 04:02:36.102995 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 2026-03-24 04:02:36.103078 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=10.2 ms 2026-03-24 04:02:37.096840 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.51 ms 2026-03-24 04:02:38.098504 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.84 ms 2026-03-24 04:02:38.098599 | orchestrator | 2026-03-24 04:02:38.098613 | orchestrator | --- 192.168.112.109 ping statistics --- 2026-03-24 04:02:38.098624 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-24 04:02:38.098633 | orchestrator | rtt min/avg/max/mdev = 1.841/4.846/10.190/3.788 ms 2026-03-24 04:02:38.098687 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-24 04:02:38.098699 | orchestrator | + ping -c3 192.168.112.135 2026-03-24 04:02:38.108453 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data. 
2026-03-24 04:02:38.108548 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=5.00 ms 2026-03-24 04:02:39.107033 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=2.34 ms 2026-03-24 04:02:40.107446 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.77 ms 2026-03-24 04:02:40.107523 | orchestrator | 2026-03-24 04:02:40.107532 | orchestrator | --- 192.168.112.135 ping statistics --- 2026-03-24 04:02:40.107537 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-03-24 04:02:40.107543 | orchestrator | rtt min/avg/max/mdev = 1.774/3.035/4.995/1.404 ms 2026-03-24 04:02:40.108286 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-24 04:02:40.108308 | orchestrator | + ping -c3 192.168.112.128 2026-03-24 04:02:40.119709 | orchestrator | PING 192.168.112.128 (192.168.112.128) 56(84) bytes of data. 2026-03-24 04:02:40.119871 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=1 ttl=63 time=7.38 ms 2026-03-24 04:02:41.116170 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=2 ttl=63 time=2.22 ms 2026-03-24 04:02:42.117794 | orchestrator | 64 bytes from 192.168.112.128: icmp_seq=3 ttl=63 time=1.53 ms 2026-03-24 04:02:42.117869 | orchestrator | 2026-03-24 04:02:42.117878 | orchestrator | --- 192.168.112.128 ping statistics --- 2026-03-24 04:02:42.117886 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-03-24 04:02:42.117893 | orchestrator | rtt min/avg/max/mdev = 1.529/3.707/7.376/2.609 ms 2026-03-24 04:02:42.117901 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-03-24 04:02:42.117908 | orchestrator | + ping -c3 192.168.112.107 2026-03-24 04:02:42.131022 | orchestrator | PING 192.168.112.107 (192.168.112.107) 56(84) bytes of data. 
2026-03-24 04:02:42.131114 | orchestrator | 64 bytes from 192.168.112.107: icmp_seq=1 ttl=63 time=7.89 ms
2026-03-24 04:02:43.126141 | orchestrator | 64 bytes from 192.168.112.107: icmp_seq=2 ttl=63 time=2.37 ms
2026-03-24 04:02:44.127081 | orchestrator | 64 bytes from 192.168.112.107: icmp_seq=3 ttl=63 time=1.77 ms
2026-03-24 04:02:44.127174 | orchestrator |
2026-03-24 04:02:44.127186 | orchestrator | --- 192.168.112.107 ping statistics ---
2026-03-24 04:02:44.127195 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-03-24 04:02:44.127203 | orchestrator | rtt min/avg/max/mdev = 1.771/4.008/7.888/2.753 ms
2026-03-24 04:02:44.127613 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-24 04:02:44.231578 | orchestrator | ok: Runtime: 0:09:39.560364
2026-03-24 04:02:44.276508 |
2026-03-24 04:02:44.276653 | TASK [Run tempest]
2026-03-24 04:02:44.812174 | orchestrator | skipping: Conditional result was False
2026-03-24 04:02:44.830238 |
2026-03-24 04:02:44.830413 | TASK [Check prometheus alert status]
2026-03-24 04:02:45.366497 | orchestrator | skipping: Conditional result was False
2026-03-24 04:02:45.373824 |
2026-03-24 04:02:45.373941 | PLAY [Upgrade testbed]
2026-03-24 04:02:45.382807 |
2026-03-24 04:02:45.383200 | TASK [Print next ceph version]
2026-03-24 04:02:45.493169 | orchestrator | ok
2026-03-24 04:02:45.503842 |
2026-03-24 04:02:45.503983 | TASK [Print next openstack version]
2026-03-24 04:02:45.582792 | orchestrator | ok
2026-03-24 04:02:45.594944 |
2026-03-24 04:02:45.595081 | TASK [Print next manager version]
2026-03-24 04:02:45.667418 | orchestrator | ok
2026-03-24 04:02:45.681000 |
2026-03-24 04:02:45.681233 | TASK [Set cloud fact (Zuul deployment)]
2026-03-24 04:02:45.749673 | orchestrator | ok
2026-03-24 04:02:45.770530 |
2026-03-24 04:02:45.770951 | TASK [Set cloud fact (local deployment)]
2026-03-24 04:02:45.810497 | orchestrator | skipping: Conditional result was False
2026-03-24 04:02:45.825151 |
2026-03-24 04:02:45.825343 | TASK [Fetch manager address]
2026-03-24 04:02:46.130937 | orchestrator | ok
2026-03-24 04:02:46.141130 |
2026-03-24 04:02:46.141254 | TASK [Set manager_host address]
2026-03-24 04:02:46.221081 | orchestrator | ok
2026-03-24 04:02:46.232619 |
2026-03-24 04:02:46.232751 | TASK [Run upgrade]
2026-03-24 04:02:46.943807 | orchestrator | + set -e
2026-03-24 04:02:46.943998 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-03-24 04:02:46.944010 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-03-24 04:02:46.944020 | orchestrator | + CEPH_VERSION=reef
2026-03-24 04:02:46.944026 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-03-24 04:02:46.944031 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-03-24 04:02:46.944041 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release'
2026-03-24 04:02:46.953308 | orchestrator | + set -e
2026-03-24 04:02:46.953359 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-24 04:02:46.953366 | orchestrator | ++ export INTERACTIVE=false
2026-03-24 04:02:46.953376 | orchestrator | ++ INTERACTIVE=false
2026-03-24 04:02:46.953381 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-24 04:02:46.953390 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-24 04:02:46.954787 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-03-24 04:02:46.996516 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-03-24 04:02:46.997469 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-03-24 04:02:47.040683 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-03-24 04:02:47.040796 | orchestrator |
2026-03-24 04:02:47.040809 | orchestrator | # UPGRADE MANAGER
2026-03-24 04:02:47.040814 | orchestrator |
2026-03-24 04:02:47.040818 | orchestrator | + echo
2026-03-24 04:02:47.040823 | orchestrator | + echo '# UPGRADE MANAGER'
2026-03-24 04:02:47.040829 | orchestrator | + echo
2026-03-24 04:02:47.040833 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-03-24 04:02:47.040838 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-03-24 04:02:47.040842 | orchestrator | + CEPH_VERSION=reef
2026-03-24 04:02:47.040846 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-03-24 04:02:47.040851 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-03-24 04:02:47.040855 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1
2026-03-24 04:02:47.048422 | orchestrator | + set -e
2026-03-24 04:02:47.048513 | orchestrator | + VERSION=10.0.0-rc.1
2026-03-24 04:02:47.048523 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml
2026-03-24 04:02:47.053826 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]]
2026-03-24 04:02:47.053912 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-24 04:02:47.057927 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-03-24 04:02:47.061350 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-24 04:02:47.069327 | orchestrator | /opt/configuration ~
2026-03-24 04:02:47.069406 | orchestrator | + set -e
2026-03-24 04:02:47.069413 | orchestrator | + pushd /opt/configuration
2026-03-24 04:02:47.069419 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-24 04:02:47.069425 | orchestrator | + source /opt/venv/bin/activate
2026-03-24 04:02:47.070365 | orchestrator | ++ deactivate nondestructive
2026-03-24 04:02:47.070450 | orchestrator | ++ '[' -n '' ']'
2026-03-24 04:02:47.070460 | orchestrator | ++ '[' -n '' ']'
2026-03-24 04:02:47.070467 | orchestrator | ++ hash -r
2026-03-24 04:02:47.070474 | orchestrator | ++ '[' -n '' ']'
2026-03-24 04:02:47.070489 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-24 04:02:47.070507 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-24 04:02:47.070512 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-24 04:02:47.070517 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-24 04:02:47.070522 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-24 04:02:47.070531 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-24 04:02:47.070535 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-24 04:02:47.070540 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-24 04:02:47.070547 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-24 04:02:47.070574 | orchestrator | ++ export PATH
2026-03-24 04:02:47.070580 | orchestrator | ++ '[' -n '' ']'
2026-03-24 04:02:47.070704 | orchestrator | ++ '[' -z '' ']'
2026-03-24 04:02:47.070711 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-24 04:02:47.070736 | orchestrator | ++ PS1='(venv) '
2026-03-24 04:02:47.070741 | orchestrator | ++ export PS1
2026-03-24 04:02:47.070745 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-24 04:02:47.070827 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-24 04:02:47.070833 | orchestrator | ++ hash -r
2026-03-24 04:02:47.070877 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-24 04:02:47.973670 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-24 04:02:47.973886 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-03-24 04:02:47.975294 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-24 04:02:47.976632 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-24 04:02:47.977849 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-24 04:02:47.988228 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-24 04:02:47.989756 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-24 04:02:47.990910 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-24 04:02:47.992604 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-24 04:02:48.025538 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-03-24 04:02:48.027343 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-24 04:02:48.029163 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-24 04:02:48.030631 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-24 04:02:48.035368 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-24 04:02:48.243522 | orchestrator | ++ which gilt
2026-03-24 04:02:48.244539 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-24 04:02:48.244578 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-24 04:02:48.517439 | orchestrator | osism.cfg-generics:
2026-03-24 04:02:48.617806 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-24 04:02:48.618510 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-24 04:02:48.619675 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-24 04:02:48.619808 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-24 04:02:49.616517 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-24 04:02:49.628847 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-24 04:02:50.152551 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-24 04:02:50.199868 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-24 04:02:50.199970 | orchestrator | + deactivate
2026-03-24 04:02:50.199980 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-24 04:02:50.199987 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-24 04:02:50.199991 | orchestrator | + export PATH
2026-03-24 04:02:50.199996 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-24 04:02:50.200001 | orchestrator | + '[' -n '' ']'
2026-03-24 04:02:50.200005 | orchestrator | + hash -r
2026-03-24 04:02:50.200009 | orchestrator | + '[' -n '' ']'
2026-03-24 04:02:50.200014 | orchestrator | + unset VIRTUAL_ENV
2026-03-24 04:02:50.200018 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-24 04:02:50.200022 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-24 04:02:50.200026 | orchestrator | + unset -f deactivate
2026-03-24 04:02:50.200075 | orchestrator | ~
2026-03-24 04:02:50.200081 | orchestrator | + popd
2026-03-24 04:02:50.202163 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]]
2026-03-24 04:02:50.202223 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-03-24 04:02:50.206674 | orchestrator | + set -e
2026-03-24 04:02:50.206730 | orchestrator | + NAMESPACE=kolla/release
2026-03-24 04:02:50.206745 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-24 04:02:50.218762 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-03-24 04:02:50.225739 | orchestrator | /opt/configuration ~
2026-03-24 04:02:50.225838 | orchestrator | + set -e
2026-03-24 04:02:50.225845 | orchestrator | + pushd /opt/configuration
2026-03-24 04:02:50.225850 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-24 04:02:50.225855 | orchestrator | + source /opt/venv/bin/activate
2026-03-24 04:02:50.225859 | orchestrator | ++ deactivate nondestructive
2026-03-24 04:02:50.225864 | orchestrator | ++ '[' -n '' ']'
2026-03-24 04:02:50.225868 | orchestrator | ++ '[' -n '' ']'
2026-03-24 04:02:50.225872 | orchestrator | ++ hash -r
2026-03-24 04:02:50.225875 | orchestrator | ++ '[' -n '' ']'
2026-03-24 04:02:50.225879 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-24 04:02:50.225883 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-24 04:02:50.225938 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-24 04:02:50.225945 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-24 04:02:50.225949 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-24 04:02:50.225953 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-24 04:02:50.226002 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-24 04:02:50.226007 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-24 04:02:50.226029 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-24 04:02:50.226035 | orchestrator | ++ export PATH
2026-03-24 04:02:50.226070 | orchestrator | ++ '[' -n '' ']'
2026-03-24 04:02:50.227055 | orchestrator | ++ '[' -z '' ']'
2026-03-24 04:02:50.227152 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-24 04:02:50.227173 | orchestrator | ++ PS1='(venv) '
2026-03-24 04:02:50.227193 | orchestrator | ++ export PS1
2026-03-24 04:02:50.227212 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-24 04:02:50.227232 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-24 04:02:50.227251 | orchestrator | ++ hash -r
2026-03-24 04:02:50.227270 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-03-24 04:02:50.760041 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-03-24 04:02:50.762535 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-03-24 04:02:50.765234 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-03-24 04:02:50.768246 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-03-24 04:02:50.769762 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-03-24 04:02:50.785206 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-03-24 04:02:50.786836 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-03-24 04:02:50.787934 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-03-24 04:02:50.791177 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-03-24 04:02:50.821730 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6)
2026-03-24 04:02:50.823041 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-03-24 04:02:50.824818 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-03-24 04:02:50.826238 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-03-24 04:02:50.831457 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-03-24 04:02:51.050521 | orchestrator | ++ which gilt
2026-03-24 04:02:51.051795 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-03-24 04:02:51.051865 | orchestrator | + /opt/venv/bin/gilt overlay
2026-03-24 04:02:51.202984 | orchestrator | osism.cfg-generics:
2026-03-24 04:02:51.262296 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-03-24 04:02:51.262427 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-03-24 04:02:51.262444 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-03-24 04:02:51.262457 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-03-24 04:02:51.723830 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-03-24 04:02:51.737139 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-03-24 04:02:52.082710 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-03-24 04:02:52.132985 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-24 04:02:52.133082 | orchestrator | + deactivate
2026-03-24 04:02:52.133117 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-24 04:02:52.133126 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-24 04:02:52.133133 | orchestrator | + export PATH
2026-03-24 04:02:52.133140 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-24 04:02:52.133149 | orchestrator | + '[' -n '' ']'
2026-03-24 04:02:52.133155 | orchestrator | + hash -r
2026-03-24 04:02:52.133162 | orchestrator | + '[' -n '' ']'
2026-03-24 04:02:52.133169 | orchestrator | + unset VIRTUAL_ENV
2026-03-24 04:02:52.133177 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-24 04:02:52.133184 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-24 04:02:52.133191 | orchestrator | + unset -f deactivate
2026-03-24 04:02:52.133198 | orchestrator | ~
2026-03-24 04:02:52.133204 | orchestrator | + popd
2026-03-24 04:02:52.135839 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-03-24 04:02:52.189225 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-24 04:02:52.189994 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-03-24 04:02:52.297759 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-24 04:02:52.297853 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-03-24 04:02:52.303016 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-03-24 04:02:52.310825 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-03-24 04:02:52.376504 | orchestrator | ++ '[' -1 -le 0 ']'
2026-03-24 04:02:52.377316 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0
2026-03-24 04:02:52.478548 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-03-24 04:02:52.478640 | orchestrator | ++ echo true
2026-03-24 04:02:52.478663 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-03-24 04:02:52.481432 | orchestrator | +++ semver 2024.2 2024.2
2026-03-24 04:02:52.564587 | orchestrator | ++ '[' 0 -le 0 ']'
2026-03-24 04:02:52.565320 | orchestrator | +++ semver 2024.2 2025.1
2026-03-24 04:02:52.628527 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-03-24 04:02:52.628606 | orchestrator | ++ echo false
2026-03-24 04:02:52.628927 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-03-24 04:02:52.629115 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-24 04:02:52.629127 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-03-24 04:02:52.629239 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-03-24 04:02:52.629401 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-03-24 04:02:52.636396 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-03-24 04:02:52.636459 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-03-24 04:02:52.655921 | orchestrator | export RABBITMQ3TO4=true
2026-03-24 04:02:52.658739 | orchestrator | + osism update manager
2026-03-24 04:02:58.006156 | orchestrator | Collecting uv
2026-03-24 04:02:58.114110 | orchestrator | Downloading uv-0.11.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-03-24 04:02:58.134167 | orchestrator | Downloading uv-0.11.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.5 MB)
2026-03-24 04:02:58.918746 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.5/24.5 MB 36.2 MB/s eta 0:00:00
2026-03-24 04:02:58.988132 | orchestrator | Installing collected packages: uv
2026-03-24 04:02:59.451652 | orchestrator | Successfully installed uv-0.11.0
2026-03-24 04:03:00.272102 | orchestrator | Resolved 11 packages in 577ms
2026-03-24 04:03:00.303362 | orchestrator | Downloading cryptography (4.3MiB)
2026-03-24 04:03:00.303452 | orchestrator | Downloading ansible-core (2.1MiB)
2026-03-24 04:03:00.303912 | orchestrator | Downloading netaddr (2.2MiB)
2026-03-24 04:03:00.414510 | orchestrator | Downloading ansible (54.5MiB)
2026-03-24 04:03:00.654910 | orchestrator | Downloaded netaddr
2026-03-24 04:03:00.769040 | orchestrator | Downloaded cryptography
2026-03-24 04:03:00.776025 | orchestrator | Downloaded ansible-core
2026-03-24 04:03:11.331964 | orchestrator | Downloaded ansible
2026-03-24 04:03:11.332066 | orchestrator | Prepared 11 packages in 11.05s
2026-03-24 04:03:11.891247 | orchestrator | Installed 11 packages in 561ms
2026-03-24 04:03:11.891345 | orchestrator | + ansible==11.11.0
2026-03-24 04:03:11.891360 | orchestrator | + ansible-core==2.18.15
2026-03-24 04:03:11.891372 | orchestrator | + cffi==2.0.0
2026-03-24 04:03:11.891384 | orchestrator | + cryptography==46.0.5
2026-03-24 04:03:11.891396 | orchestrator | + jinja2==3.1.6
2026-03-24 04:03:11.891406 | orchestrator | + markupsafe==3.0.3
2026-03-24 04:03:11.891417 | orchestrator | + netaddr==1.3.0
2026-03-24 04:03:11.891427 | orchestrator | + packaging==26.0
2026-03-24 04:03:11.891438 | orchestrator | + pycparser==3.0
2026-03-24 04:03:11.891448 | orchestrator | + pyyaml==6.0.3
2026-03-24 04:03:11.891459 | orchestrator | + resolvelib==1.0.1
2026-03-24 04:03:12.955941 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-1967873dm6kz8_/tmpv5g4sl2i/ansible-collection-services9p095eql'...
2026-03-24 04:03:14.396611 | orchestrator | Your branch is up to date with 'origin/main'.
2026-03-24 04:03:14.396681 | orchestrator | Already on 'main'
2026-03-24 04:03:14.823811 | orchestrator | Starting galaxy collection install process
2026-03-24 04:03:14.823949 | orchestrator | Process install dependency map
2026-03-24 04:03:14.823964 | orchestrator | Starting collection install process
2026-03-24 04:03:14.823976 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-03-24 04:03:14.823999 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-03-24 04:03:14.824010 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-24 04:03:15.282261 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-196827dpta8ta3/tmpvjcab_w2/ansible-playbooks-managertyixarft'...
2026-03-24 04:03:15.831702 | orchestrator | Your branch is up to date with 'origin/main'.
2026-03-24 04:03:15.831783 | orchestrator | Already on 'main'
2026-03-24 04:03:16.076615 | orchestrator | Starting galaxy collection install process
2026-03-24 04:03:16.076690 | orchestrator | Process install dependency map
2026-03-24 04:03:16.076698 | orchestrator | Starting collection install process
2026-03-24 04:03:16.076705 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-03-24 04:03:16.076712 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-03-24 04:03:16.076717 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-03-24 04:03:16.686267 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-03-24 04:03:16.686347 | orchestrator | -vvvv to see details
2026-03-24 04:03:17.071036 | orchestrator |
2026-03-24 04:03:17.071120 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-03-24 04:03:17.071131 | orchestrator |
2026-03-24 04:03:17.071138 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-24 04:03:20.776717 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:20.776818 | orchestrator |
2026-03-24 04:03:20.776830 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-24 04:03:20.847407 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-24 04:03:20.847495 | orchestrator |
2026-03-24 04:03:20.847520 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-24 04:03:22.371556 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:22.371632 | orchestrator |
2026-03-24 04:03:22.371639 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-24 04:03:22.425061 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:22.425157 | orchestrator |
2026-03-24 04:03:22.425168 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-24 04:03:22.486916 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-24 04:03:22.487008 | orchestrator |
2026-03-24 04:03:22.487019 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-24 04:03:26.363039 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-03-24 04:03:26.363105 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-03-24 04:03:26.363111 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-24 04:03:26.363126 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-03-24 04:03:26.363131 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-24 04:03:26.363135 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-24 04:03:26.363139 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-24 04:03:26.363143 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-03-24 04:03:26.363148 | orchestrator |
2026-03-24 04:03:26.363152 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-24 04:03:27.336649 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:27.336781 | orchestrator |
2026-03-24 04:03:27.336807 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-24 04:03:28.188778 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:28.188849 | orchestrator |
2026-03-24 04:03:28.188855 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-24 04:03:28.276439 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-24 04:03:28.276522 | orchestrator |
2026-03-24 04:03:28.276533 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-24 04:03:31.096038 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-03-24 04:03:31.096115 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-03-24 04:03:31.096121 | orchestrator |
2026-03-24 04:03:31.096127 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-24 04:03:31.975657 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:31.975746 | orchestrator |
2026-03-24 04:03:31.975758 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-24 04:03:32.032414 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:03:32.032485 | orchestrator |
2026-03-24 04:03:32.032493 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-24 04:03:32.117751 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-24 04:03:32.117830 | orchestrator |
2026-03-24 04:03:32.117841 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-24 04:03:33.040947 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:33.041049 | orchestrator |
2026-03-24 04:03:33.041063 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-24 04:03:33.134995 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-24 04:03:33.135088 | orchestrator |
2026-03-24 04:03:33.135100 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-24 04:03:34.884292 | orchestrator | ok: [testbed-manager] => (item=None)
2026-03-24 04:03:34.884405 | orchestrator | ok: [testbed-manager] => (item=None)
2026-03-24 04:03:34.884419 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:34.884428 | orchestrator |
2026-03-24 04:03:34.884435 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-24 04:03:35.694205 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:35.694292 | orchestrator |
2026-03-24 04:03:35.694302 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-24 04:03:35.752037 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:03:35.752107 | orchestrator |
2026-03-24 04:03:35.752114 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-24 04:03:35.850639 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-24 04:03:35.850724 | orchestrator |
2026-03-24 04:03:35.850734 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-24 04:03:36.494391 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:36.494510 | orchestrator |
2026-03-24 04:03:36.494522 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-24 04:03:37.025151 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:37.025237 | orchestrator |
2026-03-24 04:03:37.025248 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-24 04:03:38.701258 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-03-24 04:03:38.701364 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-03-24 04:03:38.701376 | orchestrator |
2026-03-24 04:03:38.701383 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-24 04:03:39.798478 | orchestrator | changed: [testbed-manager]
2026-03-24 04:03:39.798548 | orchestrator |
2026-03-24 04:03:39.798555 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-24 04:03:40.327608 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:40.327692 | orchestrator |
2026-03-24 04:03:40.327703 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-24 04:03:40.794776 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:40.794845 | orchestrator |
2026-03-24 04:03:40.794869 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-24 04:03:40.853052 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:03:40.853144 | orchestrator |
2026-03-24 04:03:40.853160 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-24 04:03:40.923559 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-24 04:03:40.923642 | orchestrator |
2026-03-24 04:03:40.923656 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-24 04:03:40.979746 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:40.979842 | orchestrator |
2026-03-24 04:03:40.979856 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-24 04:03:43.682231 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-03-24 04:03:43.682311 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-03-24 04:03:43.682318 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-03-24 04:03:43.682323 | orchestrator |
2026-03-24 04:03:43.682328 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-24 04:03:44.624664 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:44.624746 | orchestrator |
2026-03-24 04:03:44.624755 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-24 04:03:45.485404 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:45.485492 | orchestrator |
2026-03-24 04:03:45.485500 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-24 04:03:46.433884 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:46.433965 | orchestrator |
2026-03-24 04:03:46.434010 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-24 04:03:46.507718 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-24 04:03:46.507827 | orchestrator |
2026-03-24 04:03:46.507840 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-24 04:03:46.558344 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:46.558432 | orchestrator |
2026-03-24 04:03:46.558443 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-24 04:03:47.579502 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-03-24 04:03:47.579629 | orchestrator |
2026-03-24 04:03:47.579656 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-24 04:03:47.667758 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-24 04:03:47.667847 | orchestrator |
2026-03-24 04:03:47.667861 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-24 04:03:48.719907 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:48.720040 | orchestrator |
2026-03-24 04:03:48.720050 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-24 04:03:49.764676 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:49.764772 | orchestrator |
2026-03-24 04:03:49.764785 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-24 04:03:49.842811 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:03:49.842916 | orchestrator |
2026-03-24 04:03:49.842934 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-24 04:03:49.912186 | orchestrator | ok: [testbed-manager]
2026-03-24 04:03:49.912292 | orchestrator |
2026-03-24 04:03:49.912309 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-24 04:03:51.167712 | orchestrator | changed: [testbed-manager]
2026-03-24 04:03:51.167804 | orchestrator |
2026-03-24 04:03:51.167818 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-24 04:04:50.090724 | orchestrator | changed: [testbed-manager]
2026-03-24 04:04:50.090850 | orchestrator |
2026-03-24 04:04:50.090869 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-24 04:04:51.251456 | orchestrator | ok: [testbed-manager]
2026-03-24 04:04:51.251580 | orchestrator |
2026-03-24 04:04:51.251607 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-24 04:04:51.315216 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:04:51.315315 | orchestrator |
2026-03-24 04:04:51.315329 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-24 04:04:52.121871 | orchestrator | ok: [testbed-manager]
2026-03-24 04:04:52.121940 | orchestrator |
2026-03-24 04:04:52.121947 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-24 04:04:52.186892 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:04:52.186965 | orchestrator |
2026-03-24 04:04:52.186973 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-24 04:04:52.186980 | orchestrator |
2026-03-24 04:04:52.186985 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-24 04:05:06.539058 | orchestrator | changed: [testbed-manager]
2026-03-24 04:05:06.539143 | orchestrator |
2026-03-24 04:05:06.539152 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-24 04:06:06.592039 | orchestrator | Pausing for 60 seconds
2026-03-24 04:06:06.592173 | orchestrator | changed: [testbed-manager]
2026-03-24 04:06:06.592190 | orchestrator |
2026-03-24 04:06:06.592204 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-03-24 04:06:06.635270 | orchestrator | ok: [testbed-manager]
2026-03-24 04:06:06.635372 | orchestrator |
2026-03-24 04:06:06.635450 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-24 04:06:10.091096 | orchestrator | changed: [testbed-manager]
2026-03-24 04:06:10.091173 | orchestrator |
2026-03-24 04:06:10.091181 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-24 04:07:12.667670 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-24 04:07:12.667753 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-24 04:07:12.667761 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-24 04:07:12.667767 | orchestrator | changed: [testbed-manager] 2026-03-24 04:07:12.667774 | orchestrator | 2026-03-24 04:07:12.667780 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-24 04:07:23.246193 | orchestrator | changed: [testbed-manager] 2026-03-24 04:07:23.246322 | orchestrator | 2026-03-24 04:07:23.246342 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-24 04:07:23.322143 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-24 04:07:23.322250 | orchestrator | 2026-03-24 04:07:23.322261 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-24 04:07:23.322268 | orchestrator | 2026-03-24 04:07:23.322275 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-24 04:07:23.385867 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:07:23.385975 | orchestrator | 2026-03-24 04:07:23.385994 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-24 04:07:23.446973 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-24 04:07:23.447069 | orchestrator | 2026-03-24 04:07:23.447079 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-24 04:07:24.487708 | orchestrator | changed: [testbed-manager] 2026-03-24 04:07:24.487802 | orchestrator | 2026-03-24 04:07:24.487812 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-24 04:07:27.835706 
| orchestrator | ok: [testbed-manager] 2026-03-24 04:07:27.835805 | orchestrator | 2026-03-24 04:07:27.835820 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-24 04:07:27.917147 | orchestrator | ok: [testbed-manager] => { 2026-03-24 04:07:27.917288 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-24 04:07:27.917315 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-24 04:07:27.917332 | orchestrator | "Checking running containers against expected versions...", 2026-03-24 04:07:27.917349 | orchestrator | "", 2026-03-24 04:07:27.917366 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-24 04:07:27.917382 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-24 04:07:27.917398 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.917413 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-03-24 04:07:27.917429 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.917447 | orchestrator | "", 2026-03-24 04:07:27.917463 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-24 04:07:27.917479 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-24 04:07:27.917493 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.917510 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-03-24 04:07:27.917527 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.917544 | orchestrator | "", 2026-03-24 04:07:27.917560 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-24 04:07:27.917576 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-24 04:07:27.917702 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.917717 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-03-24 04:07:27.917729 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.917740 | orchestrator | "", 2026-03-24 04:07:27.917751 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-24 04:07:27.917763 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-24 04:07:27.917774 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.917785 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-03-24 04:07:27.917796 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.917807 | orchestrator | "", 2026-03-24 04:07:27.917819 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-24 04:07:27.917830 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-24 04:07:27.917841 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.917852 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-03-24 04:07:27.917863 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.917874 | orchestrator | "", 2026-03-24 04:07:27.917885 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-24 04:07:27.917923 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.917934 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.917946 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.917957 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.917967 | orchestrator | "", 2026-03-24 04:07:27.917977 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-24 04:07:27.917987 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-24 04:07:27.917996 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.918006 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-24 
04:07:27.918072 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.918083 | orchestrator | "", 2026-03-24 04:07:27.918093 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-24 04:07:27.918103 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-24 04:07:27.918112 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.918133 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-24 04:07:27.918143 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.918153 | orchestrator | "", 2026-03-24 04:07:27.918163 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-24 04:07:27.918173 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-24 04:07:27.918182 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.918192 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-03-24 04:07:27.918201 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.918211 | orchestrator | "", 2026-03-24 04:07:27.918226 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-24 04:07:27.918236 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-24 04:07:27.918246 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.918256 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-24 04:07:27.918266 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.918275 | orchestrator | "", 2026-03-24 04:07:27.918285 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-24 04:07:27.918294 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918304 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.918314 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918323 | orchestrator | " Status: ✅ MATCH", 2026-03-24 
04:07:27.918333 | orchestrator | "", 2026-03-24 04:07:27.918342 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-24 04:07:27.918352 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918362 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.918371 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918381 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.918391 | orchestrator | "", 2026-03-24 04:07:27.918400 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-24 04:07:27.918410 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918419 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.918429 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918444 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.918461 | orchestrator | "", 2026-03-24 04:07:27.918477 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-24 04:07:27.918493 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918508 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.918523 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918564 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.918581 | orchestrator | "", 2026-03-24 04:07:27.918624 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-24 04:07:27.918641 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918672 | orchestrator | " Enabled: true", 2026-03-24 04:07:27.918688 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-03-24 04:07:27.918703 | orchestrator | " Status: ✅ MATCH", 2026-03-24 04:07:27.918718 | orchestrator | "", 2026-03-24 04:07:27.918734 | orchestrator | "=== Summary 
===", 2026-03-24 04:07:27.918750 | orchestrator | "Errors (version mismatches): 0", 2026-03-24 04:07:27.918767 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-24 04:07:27.918783 | orchestrator | "", 2026-03-24 04:07:27.918799 | orchestrator | "✅ All running containers match expected versions!" 2026-03-24 04:07:27.918815 | orchestrator | ] 2026-03-24 04:07:27.918832 | orchestrator | } 2026-03-24 04:07:27.918843 | orchestrator | 2026-03-24 04:07:27.918853 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-24 04:07:27.974452 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:07:27.974556 | orchestrator | 2026-03-24 04:07:27.974575 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:07:27.974625 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-03-24 04:07:27.974638 | orchestrator | 2026-03-24 04:07:40.332983 | orchestrator | 2026-03-24 04:07:40 | INFO  | Task 479047d7-d7fb-4bbc-acb4-ec217a3fd6ec (sync inventory) is running in background. Output coming soon. 
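The version check output above compares each service's expected image tag against what is actually running and tallies mismatches. A minimal sketch of that comparison logic (the real script is deployed by the osism.services.manager role; the function name and output format here are assumptions modeled on the log):

```shell
#!/usr/bin/env bash
# Sketch of a per-service container version check, assuming the running
# image would be obtained with: docker inspect -f '{{.Config.Image}}' <name>
set -u

errors=0
check_service() {
    # check_service <service> <expected-image> <running-image>
    local service="$1" expected="$2" running="$3"
    echo "Checking service: ${service}"
    echo "  Expected: ${expected}"
    echo "  Running:  ${running}"
    if [ "$running" = "$expected" ]; then
        echo "  Status: MATCH"
    else
        echo "  Status: MISMATCH"
        errors=$((errors + 1))
    fi
}

check_service mariadb \
    registry.osism.tech/dockerhub/library/mariadb:11.8.4 \
    registry.osism.tech/dockerhub/library/mariadb:11.8.4

echo "Errors (version mismatches): ${errors}"
```

The run above reported zero errors and zero warnings, so the play recap shows `failed=0`.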
2026-03-24 04:08:07.503286 | orchestrator | 2026-03-24 04:07:41 | INFO  | Starting group_vars file reorganization
2026-03-24 04:08:07.503384 | orchestrator | 2026-03-24 04:07:41 | INFO  | Moved 0 file(s) to their respective directories
2026-03-24 04:08:07.503393 | orchestrator | 2026-03-24 04:07:41 | INFO  | Group_vars file reorganization completed
2026-03-24 04:08:07.503413 | orchestrator | 2026-03-24 04:07:44 | INFO  | Starting variable preparation from inventory
2026-03-24 04:08:07.503418 | orchestrator | 2026-03-24 04:07:47 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-24 04:08:07.503423 | orchestrator | 2026-03-24 04:07:47 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-24 04:08:07.503438 | orchestrator | 2026-03-24 04:07:47 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-24 04:08:07.503442 | orchestrator | 2026-03-24 04:07:47 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-24 04:08:07.503452 | orchestrator | 2026-03-24 04:07:47 | INFO  | Variable preparation completed
2026-03-24 04:08:07.503456 | orchestrator | 2026-03-24 04:07:49 | INFO  | Starting inventory overwrite handling
2026-03-24 04:08:07.503460 | orchestrator | 2026-03-24 04:07:49 | INFO  | Handling group overwrites in 99-overwrite
2026-03-24 04:08:07.503465 | orchestrator | 2026-03-24 04:07:49 | INFO  | Removing group frr:children from 60-generic
2026-03-24 04:08:07.503469 | orchestrator | 2026-03-24 04:07:49 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-24 04:08:07.503474 | orchestrator | 2026-03-24 04:07:49 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-24 04:08:07.503478 | orchestrator | 2026-03-24 04:07:49 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-24 04:08:07.503482 | orchestrator | 2026-03-24 04:07:49 | INFO  | Handling group overwrites in 20-roles
2026-03-24 04:08:07.503486 | orchestrator | 2026-03-24 04:07:49 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-24 04:08:07.503490 | orchestrator | 2026-03-24 04:07:49 | INFO  | Removed 5 group(s) in total
2026-03-24 04:08:07.503494 | orchestrator | 2026-03-24 04:07:49 | INFO  | Inventory overwrite handling completed
2026-03-24 04:08:07.503498 | orchestrator | 2026-03-24 04:07:50 | INFO  | Starting merge of inventory files
2026-03-24 04:08:07.503502 | orchestrator | 2026-03-24 04:07:50 | INFO  | Inventory files merged successfully
2026-03-24 04:08:07.503522 | orchestrator | 2026-03-24 04:07:55 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-24 04:08:07.503526 | orchestrator | 2026-03-24 04:08:06 | INFO  | Successfully wrote ClusterShell configuration
2026-03-24 04:08:07.793225 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-24 04:08:07.793293 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-24 04:08:07.793300 | orchestrator | + local max_attempts=60
2026-03-24 04:08:07.793306 | orchestrator | + local name=kolla-ansible
2026-03-24 04:08:07.793310 | orchestrator | + local attempt_num=1
2026-03-24 04:08:07.793315 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-24 04:08:07.826505 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-24 04:08:07.826587 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-24 04:08:07.826597 | orchestrator | + local max_attempts=60
2026-03-24 04:08:07.826607 | orchestrator | + local name=osism-ansible
2026-03-24 04:08:07.826614 | orchestrator | + local attempt_num=1
2026-03-24 04:08:07.826969 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-24 04:08:07.861830 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-24 04:08:07.861898 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-24 04:08:08.041196 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-24 04:08:08.041285 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2026-03-24 04:08:08.041296 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2026-03-24 04:08:08.041304 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-03-24 04:08:08.041316 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp
2026-03-24 04:08:08.041324 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy)
2026-03-24 04:08:08.041332 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy)
2026-03-24 04:08:08.041339 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up About a minute (healthy)
2026-03-24 04:08:08.041346 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 26 seconds ago
2026-03-24 04:08:08.041362 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp
2026-03-24 04:08:08.041376 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy)
2026-03-24 04:08:08.041384 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp
2026-03-24 04:08:08.041391 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2026-03-24 04:08:08.041422 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp
2026-03-24 04:08:08.041430 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2026-03-24 04:08:08.041437 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy)
2026-03-24 04:08:08.046408 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-03-24 04:08:08.046482 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-03-24 04:08:08.046491 | orchestrator | + osism apply facts
2026-03-24 04:08:20.290237 | orchestrator | 2026-03-24 04:08:20 | INFO  | Task 405530b4-819e-432b-bd6e-67fcca89b568 (facts) was prepared for execution.
2026-03-24 04:08:20.290318 | orchestrator | 2026-03-24 04:08:20 | INFO  | It takes a moment until task 405530b4-819e-432b-bd6e-67fcca89b568 (facts) has been started and output is visible here.
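The `wait_for_container_healthy` calls traced above (with `set -x` expansion showing `max_attempts`, `name`, and the `docker inspect` health probe) suggest a simple polling loop. A hypothetical reconstruction, with the probe factored into its own function so it can be stubbed; the 5-second sleep interval is an assumption not visible in the trace:

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the wait_for_container_healthy helper
# seen in the trace. The real script calls docker inspect inline.
health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until it reports "healthy"
    # or the attempt budget is exhausted.
    until [ "$(health_status "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # interval assumed; not shown in the trace
    done
}
```

In the log both `kolla-ansible` and `osism-ansible` were already healthy on the first probe, so the loop body never ran.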
2026-03-24 04:08:42.731189 | orchestrator | 2026-03-24 04:08:42.731318 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-24 04:08:42.731337 | orchestrator | 2026-03-24 04:08:42.731350 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-24 04:08:42.731362 | orchestrator | Tuesday 24 March 2026 04:08:26 +0000 (0:00:02.546) 0:00:02.546 ********* 2026-03-24 04:08:42.731373 | orchestrator | ok: [testbed-manager] 2026-03-24 04:08:42.731386 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:08:42.731397 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:08:42.731408 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:08:42.731418 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:08:42.731429 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:08:42.731440 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:08:42.731451 | orchestrator | 2026-03-24 04:08:42.731462 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-24 04:08:42.731473 | orchestrator | Tuesday 24 March 2026 04:08:30 +0000 (0:00:03.596) 0:00:06.142 ********* 2026-03-24 04:08:42.731484 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:08:42.731496 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:08:42.731507 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:08:42.731518 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:08:42.731528 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:08:42.731539 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:08:42.731550 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:08:42.731561 | orchestrator | 2026-03-24 04:08:42.731572 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-24 04:08:42.731583 | orchestrator | 2026-03-24 04:08:42.731593 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-24 04:08:42.731606 | orchestrator | Tuesday 24 March 2026 04:08:32 +0000 (0:00:02.344) 0:00:08.487 ********* 2026-03-24 04:08:42.731625 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:08:42.731671 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:08:42.731691 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:08:42.731711 | orchestrator | ok: [testbed-manager] 2026-03-24 04:08:42.731736 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:08:42.731790 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:08:42.731809 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:08:42.731829 | orchestrator | 2026-03-24 04:08:42.731847 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-24 04:08:42.731867 | orchestrator | 2026-03-24 04:08:42.731886 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-24 04:08:42.731905 | orchestrator | Tuesday 24 March 2026 04:08:39 +0000 (0:00:07.117) 0:00:15.604 ********* 2026-03-24 04:08:42.731924 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:08:42.731975 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:08:42.731997 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:08:42.732016 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:08:42.732034 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:08:42.732052 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:08:42.732124 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:08:42.732138 | orchestrator | 2026-03-24 04:08:42.732149 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:08:42.732161 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 04:08:42.732173 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-24 04:08:42.732184 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 04:08:42.732195 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 04:08:42.732206 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 04:08:42.732216 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 04:08:42.732227 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 04:08:42.732238 | orchestrator | 2026-03-24 04:08:42.732249 | orchestrator | 2026-03-24 04:08:42.732259 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:08:42.732270 | orchestrator | Tuesday 24 March 2026 04:08:42 +0000 (0:00:02.692) 0:00:18.297 ********* 2026-03-24 04:08:42.732281 | orchestrator | =============================================================================== 2026-03-24 04:08:42.732292 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.12s 2026-03-24 04:08:42.732303 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.60s 2026-03-24 04:08:42.732314 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.69s 2026-03-24 04:08:42.732325 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.34s 2026-03-24 04:08:42.974231 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-24 04:08:43.026095 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-24 04:08:43.027603 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-24 04:08:43.062122 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-03-24 04:08:43.062204 | 
orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-03-24 04:08:43.065634 | orchestrator | + set -e 2026-03-24 04:08:43.065701 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-03-24 04:08:43.065708 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-24 04:08:43.072487 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-03-24 04:08:43.077430 | orchestrator | + set -e 2026-03-24 04:08:43.077511 | orchestrator | 2026-03-24 04:08:43.077523 | orchestrator | # UPGRADE SERVICES 2026-03-24 04:08:43.077530 | orchestrator | 2026-03-24 04:08:43.077536 | orchestrator | + echo 2026-03-24 04:08:43.077543 | orchestrator | + echo '# UPGRADE SERVICES' 2026-03-24 04:08:43.077549 | orchestrator | + echo 2026-03-24 04:08:43.077554 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 04:08:43.078121 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 04:08:43.078896 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 04:08:43.078929 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-24 04:08:43.078936 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 04:08:43.078943 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 04:08:43.078952 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 04:08:43.078959 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 04:08:43.078990 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 04:08:43.078996 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 04:08:43.079003 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 04:08:43.079009 | orchestrator | ++ export ARA=false 2026-03-24 04:08:43.079015 | orchestrator | ++ ARA=false 2026-03-24 04:08:43.079022 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 04:08:43.079028 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 04:08:43.079034 | orchestrator | ++ export TEMPEST=false 
2026-03-24 04:08:43.079040 | orchestrator | ++ TEMPEST=false 2026-03-24 04:08:43.079046 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 04:08:43.079053 | orchestrator | ++ IS_ZUUL=true 2026-03-24 04:08:43.079059 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 04:08:43.079066 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 04:08:43.079072 | orchestrator | ++ export EXTERNAL_API=false 2026-03-24 04:08:43.079079 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 04:08:43.079085 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 04:08:43.079091 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 04:08:43.079097 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 04:08:43.079103 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 04:08:43.079109 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 04:08:43.079115 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 04:08:43.079121 | orchestrator | ++ export RABBITMQ3TO4=true 2026-03-24 04:08:43.079128 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-24 04:08:43.079134 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-03-24 04:08:43.079140 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-03-24 04:08:43.079146 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-24 04:08:43.082647 | orchestrator | + set -e 2026-03-24 04:08:43.082694 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 04:08:43.083399 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 04:08:43.083460 | orchestrator | ++ INTERACTIVE=false 2026-03-24 04:08:43.083471 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 04:08:43.083478 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 04:08:43.083485 | orchestrator | + source /opt/manager-vars.sh 2026-03-24 04:08:43.083490 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-24 04:08:43.083496 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-24 04:08:43.083502 | orchestrator | ++ 
export CEPH_VERSION=reef 2026-03-24 04:08:43.083508 | orchestrator | ++ CEPH_VERSION=reef 2026-03-24 04:08:43.083515 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-24 04:08:43.083522 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-24 04:08:43.083547 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-24 04:08:43.083553 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-24 04:08:43.083558 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-24 04:08:43.083562 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-24 04:08:43.083566 | orchestrator | ++ export ARA=false 2026-03-24 04:08:43.083571 | orchestrator | ++ ARA=false 2026-03-24 04:08:43.083575 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-24 04:08:43.083579 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-24 04:08:43.083583 | orchestrator | ++ export TEMPEST=false 2026-03-24 04:08:43.083587 | orchestrator | ++ TEMPEST=false 2026-03-24 04:08:43.083591 | orchestrator | ++ export IS_ZUUL=true 2026-03-24 04:08:43.083596 | orchestrator | ++ IS_ZUUL=true 2026-03-24 04:08:43.083602 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 04:08:43.083609 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246 2026-03-24 04:08:43.083615 | orchestrator | ++ export EXTERNAL_API=false 2026-03-24 04:08:43.083622 | orchestrator | ++ EXTERNAL_API=false 2026-03-24 04:08:43.083628 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-24 04:08:43.083634 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-24 04:08:43.083642 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-24 04:08:43.083646 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-24 04:08:43.083650 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-24 04:08:43.083655 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-24 04:08:43.083783 | orchestrator | 2026-03-24 04:08:43.083795 | orchestrator | # PULL IMAGES 2026-03-24 04:08:43.083799 | orchestrator | 2026-03-24 04:08:43.083803 | 
orchestrator | ++ export RABBITMQ3TO4=true 2026-03-24 04:08:43.083806 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-24 04:08:43.083810 | orchestrator | + echo 2026-03-24 04:08:43.083814 | orchestrator | + echo '# PULL IMAGES' 2026-03-24 04:08:43.083818 | orchestrator | + echo 2026-03-24 04:08:43.084051 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-24 04:08:43.116254 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-24 04:08:43.116319 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-24 04:08:44.826183 | orchestrator | 2026-03-24 04:08:44 | INFO  | Trying to run play pull-images in environment custom 2026-03-24 04:08:54.947052 | orchestrator | 2026-03-24 04:08:54 | INFO  | Task 7f8d3da6-8b59-465a-ab28-dd2de2993217 (pull-images) was prepared for execution. 2026-03-24 04:08:54.947141 | orchestrator | 2026-03-24 04:08:54 | INFO  | Task 7f8d3da6-8b59-465a-ab28-dd2de2993217 is running in background. No more output. Check ARA for logs. 2026-03-24 04:08:55.405381 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-03-24 04:08:55.414263 | orchestrator | + set -e 2026-03-24 04:08:55.414343 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 04:08:55.414354 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 04:08:55.414362 | orchestrator | ++ INTERACTIVE=false 2026-03-24 04:08:55.414368 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 04:08:55.414375 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 04:08:55.414382 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-24 04:08:55.415382 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-24 04:08:55.427812 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-03-24 04:08:55.427881 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-03-24 04:08:55.428956 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3 2026-03-24 04:08:55.477624 | 
orchestrator | + [[ 1 -ge 0 ]] 2026-03-24 04:08:55.477703 | orchestrator | + osism apply frr 2026-03-24 04:09:07.717655 | orchestrator | 2026-03-24 04:09:07 | INFO  | Task 0ecd077e-afbf-43d8-bcbb-def5bbc57c34 (frr) was prepared for execution. 2026-03-24 04:09:07.717763 | orchestrator | 2026-03-24 04:09:07 | INFO  | It takes a moment until task 0ecd077e-afbf-43d8-bcbb-def5bbc57c34 (frr) has been started and output is visible here. 2026-03-24 04:09:37.969612 | orchestrator | 2026-03-24 04:09:37.969734 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-24 04:09:37.969750 | orchestrator | 2026-03-24 04:09:37.969762 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-24 04:09:37.969774 | orchestrator | Tuesday 24 March 2026 04:09:15 +0000 (0:00:02.727) 0:00:02.727 ********* 2026-03-24 04:09:37.969786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-24 04:09:37.969798 | orchestrator | 2026-03-24 04:09:37.969810 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-24 04:09:37.969821 | orchestrator | Tuesday 24 March 2026 04:09:16 +0000 (0:00:01.893) 0:00:04.620 ********* 2026-03-24 04:09:37.969832 | orchestrator | ok: [testbed-manager] 2026-03-24 04:09:37.969914 | orchestrator | 2026-03-24 04:09:37.969926 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-24 04:09:37.969938 | orchestrator | Tuesday 24 March 2026 04:09:19 +0000 (0:00:02.165) 0:00:06.786 ********* 2026-03-24 04:09:37.969949 | orchestrator | ok: [testbed-manager] 2026-03-24 04:09:37.969959 | orchestrator | 2026-03-24 04:09:37.969970 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-24 04:09:37.969981 | orchestrator | Tuesday 24 
March 2026 04:09:21 +0000 (0:00:02.374) 0:00:09.160 ********* 2026-03-24 04:09:37.969992 | orchestrator | ok: [testbed-manager] 2026-03-24 04:09:37.970004 | orchestrator | 2026-03-24 04:09:37.970075 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-24 04:09:37.970088 | orchestrator | Tuesday 24 March 2026 04:09:23 +0000 (0:00:01.850) 0:00:11.011 ********* 2026-03-24 04:09:37.970099 | orchestrator | ok: [testbed-manager] 2026-03-24 04:09:37.970110 | orchestrator | 2026-03-24 04:09:37.970130 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-24 04:09:37.970141 | orchestrator | Tuesday 24 March 2026 04:09:25 +0000 (0:00:01.845) 0:00:12.856 ********* 2026-03-24 04:09:37.970154 | orchestrator | ok: [testbed-manager] 2026-03-24 04:09:37.970166 | orchestrator | 2026-03-24 04:09:37.970179 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-24 04:09:37.970192 | orchestrator | Tuesday 24 March 2026 04:09:27 +0000 (0:00:02.313) 0:00:15.170 ********* 2026-03-24 04:09:37.970204 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:09:37.970243 | orchestrator | 2026-03-24 04:09:37.970257 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-24 04:09:37.970269 | orchestrator | Tuesday 24 March 2026 04:09:28 +0000 (0:00:01.127) 0:00:16.298 ********* 2026-03-24 04:09:37.970282 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:09:37.970295 | orchestrator | 2026-03-24 04:09:37.970308 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-24 04:09:37.970320 | orchestrator | Tuesday 24 March 2026 04:09:29 +0000 (0:00:01.151) 0:00:17.450 ********* 2026-03-24 04:09:37.970332 | orchestrator | ok: [testbed-manager] 2026-03-24 04:09:37.970346 | orchestrator | 2026-03-24 04:09:37.970358 | orchestrator | 
TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-24 04:09:37.970371 | orchestrator | Tuesday 24 March 2026 04:09:31 +0000 (0:00:01.864) 0:00:19.314 ********* 2026-03-24 04:09:37.970384 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-24 04:09:37.970397 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-24 04:09:37.970410 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-24 04:09:37.970424 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-24 04:09:37.970436 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-24 04:09:37.970449 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-24 04:09:37.970461 | orchestrator | 2026-03-24 04:09:37.970490 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-24 04:09:37.970504 | orchestrator | Tuesday 24 March 2026 04:09:35 +0000 (0:00:03.375) 0:00:22.690 ********* 2026-03-24 04:09:37.970515 | orchestrator | ok: [testbed-manager] 2026-03-24 04:09:37.970526 | orchestrator | 2026-03-24 04:09:37.970537 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:09:37.970548 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-24 04:09:37.970559 | orchestrator | 2026-03-24 04:09:37.970570 | orchestrator | 2026-03-24 04:09:37.970581 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:09:37.970592 | orchestrator | Tuesday 24 March 2026 04:09:37 +0000 (0:00:02.612) 0:00:25.302 ********* 2026-03-24 
04:09:37.970602 | orchestrator | =============================================================================== 2026-03-24 04:09:37.970613 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.38s 2026-03-24 04:09:37.970624 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.61s 2026-03-24 04:09:37.970635 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.37s 2026-03-24 04:09:37.970645 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.31s 2026-03-24 04:09:37.970656 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.17s 2026-03-24 04:09:37.970667 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.89s 2026-03-24 04:09:37.970677 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.86s 2026-03-24 04:09:37.970688 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.85s 2026-03-24 04:09:37.970717 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.85s 2026-03-24 04:09:37.970728 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.15s 2026-03-24 04:09:37.970739 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.13s 2026-03-24 04:09:38.262814 | orchestrator | + osism apply kubernetes 2026-03-24 04:09:40.372150 | orchestrator | 2026-03-24 04:09:40 | INFO  | Task 0bd079cf-5f67-4da6-a8ac-f818c54845f7 (kubernetes) was prepared for execution. 2026-03-24 04:09:40.372278 | orchestrator | 2026-03-24 04:09:40 | INFO  | It takes a moment until task 0bd079cf-5f67-4da6-a8ac-f818c54845f7 (kubernetes) has been started and output is visible here. 
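The upgrade scripts in this section repeatedly gate each step on a version comparison: a `semver` helper is called (`semver 9.5.0 7.0.0`, `semver 10.0.0-rc.1 8.0.3`) and the step runs only when the result passes `[[ 1 -ge 0 ]]`. A minimal sketch of that gating pattern, using a hypothetical pure-bash `semver` stand-in — the real helper is not shown in the log and may differ, for example in how it ranks pre-release suffixes like `-rc.1`, which this sketch simply strips:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the `semver` helper invoked above: prints 1, 0,
# or -1 depending on whether version A is newer than, equal to, or older
# than version B. Compares MAJOR.MINOR.PATCH only and ignores pre-release
# suffixes such as "-rc.1" (an assumption; the real helper is not shown).
semver() {
    local a1 a2 a3 b1 b2 b3 pair
    IFS=. read -r a1 a2 a3 <<< "${1%%-*}"
    IFS=. read -r b1 b2 b3 <<< "${2%%-*}"
    for pair in "$a1 $b1" "$a2 $b2" "$a3 $b3"; do
        set -- $pair
        if (( $1 > $2 )); then echo 1; return; fi
        if (( $1 < $2 )); then echo -1; return; fi
    done
    echo 0
}

# Gate an upgrade step on the manager version, mirroring the
# `[[ $(semver ...) -ge 0 ]]` pattern seen in the traces above.
MANAGER_VERSION=10.0.0-rc.1
if [[ $(semver "$MANAGER_VERSION" 8.0.3) -ge 0 ]]; then
    echo "manager is new enough, running upgrade step"
fi
```

The same gate appears twice in this section: once before `pull-images` (against 7.0.0) and once before the kubernetes upgrade step (against 8.0.3), so older managers skip steps their configuration does not support.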
2026-03-24 04:10:22.810447 | orchestrator | 2026-03-24 04:10:22.810593 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-24 04:10:22.810618 | orchestrator | 2026-03-24 04:10:22.810654 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-24 04:10:22.810675 | orchestrator | Tuesday 24 March 2026 04:09:46 +0000 (0:00:02.122) 0:00:02.122 ********* 2026-03-24 04:10:22.810691 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:10:22.810709 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:10:22.810725 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:10:22.810742 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:10:22.810757 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:10:22.810772 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:10:22.810789 | orchestrator | 2026-03-24 04:10:22.810805 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-24 04:10:22.810821 | orchestrator | Tuesday 24 March 2026 04:09:50 +0000 (0:00:04.084) 0:00:06.207 ********* 2026-03-24 04:10:22.810837 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.810854 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.810870 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.810887 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:10:22.810904 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.810948 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:10:22.810968 | orchestrator | 2026-03-24 04:10:22.810985 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-24 04:10:22.811003 | orchestrator | Tuesday 24 March 2026 04:09:52 +0000 (0:00:01.862) 0:00:08.070 ********* 2026-03-24 04:10:22.811019 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.811038 | orchestrator | skipping: [testbed-node-4] 2026-03-24 
04:10:22.811055 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.811073 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:10:22.811090 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.811106 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:10:22.811123 | orchestrator | 2026-03-24 04:10:22.811140 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-24 04:10:22.811159 | orchestrator | Tuesday 24 March 2026 04:09:54 +0000 (0:00:01.798) 0:00:09.869 ********* 2026-03-24 04:10:22.811177 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:10:22.811195 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:10:22.811210 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:10:22.811222 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:10:22.811234 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:10:22.811245 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:10:22.811256 | orchestrator | 2026-03-24 04:10:22.811267 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-24 04:10:22.811278 | orchestrator | Tuesday 24 March 2026 04:09:57 +0000 (0:00:03.091) 0:00:12.961 ********* 2026-03-24 04:10:22.811289 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:10:22.811300 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:10:22.811311 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:10:22.811320 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:10:22.811330 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:10:22.811339 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:10:22.811349 | orchestrator | 2026-03-24 04:10:22.811358 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-24 04:10:22.811368 | orchestrator | Tuesday 24 March 2026 04:09:59 +0000 (0:00:02.432) 0:00:15.393 ********* 2026-03-24 04:10:22.811377 | orchestrator | ok: [testbed-node-3] 2026-03-24 
04:10:22.811387 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:10:22.811396 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:10:22.811406 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:10:22.811415 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:10:22.811451 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:10:22.811461 | orchestrator | 2026-03-24 04:10:22.811470 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-24 04:10:22.811480 | orchestrator | Tuesday 24 March 2026 04:10:02 +0000 (0:00:02.084) 0:00:17.478 ********* 2026-03-24 04:10:22.811489 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.811499 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.811509 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.811519 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:10:22.811528 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.811537 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:10:22.811547 | orchestrator | 2026-03-24 04:10:22.811556 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-24 04:10:22.811566 | orchestrator | Tuesday 24 March 2026 04:10:03 +0000 (0:00:01.895) 0:00:19.374 ********* 2026-03-24 04:10:22.811575 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.811585 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.811594 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.811603 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:10:22.811613 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.811622 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:10:22.811631 | orchestrator | 2026-03-24 04:10:22.811641 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-24 04:10:22.811654 | orchestrator | Tuesday 24 March 2026 04:10:05 +0000 
(0:00:01.690) 0:00:21.065 ********* 2026-03-24 04:10:22.811671 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 04:10:22.811686 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 04:10:22.811702 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.811717 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 04:10:22.811749 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 04:10:22.811768 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.811784 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 04:10:22.811801 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 04:10:22.811815 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.811825 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 04:10:22.811835 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 04:10:22.811844 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:10:22.811875 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 04:10:22.811885 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 04:10:22.811895 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.811904 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-24 04:10:22.811914 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-24 04:10:22.811953 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:10:22.811964 | orchestrator | 2026-03-24 04:10:22.811974 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin 
to sudo secure_path] ********************* 2026-03-24 04:10:22.811983 | orchestrator | Tuesday 24 March 2026 04:10:07 +0000 (0:00:01.878) 0:00:22.943 ********* 2026-03-24 04:10:22.811993 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.812005 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.812021 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.812035 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:10:22.812045 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.812054 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:10:22.812063 | orchestrator | 2026-03-24 04:10:22.812083 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-24 04:10:22.812094 | orchestrator | Tuesday 24 March 2026 04:10:09 +0000 (0:00:02.023) 0:00:24.967 ********* 2026-03-24 04:10:22.812104 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:10:22.812113 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:10:22.812123 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:10:22.812132 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:10:22.812142 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:10:22.812151 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:10:22.812161 | orchestrator | 2026-03-24 04:10:22.812170 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-24 04:10:22.812180 | orchestrator | Tuesday 24 March 2026 04:10:11 +0000 (0:00:01.994) 0:00:26.962 ********* 2026-03-24 04:10:22.812189 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:10:22.812199 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:10:22.812208 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:10:22.812218 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:10:22.812233 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:10:22.812243 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:10:22.812252 | 
orchestrator | 2026-03-24 04:10:22.812262 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-24 04:10:22.812271 | orchestrator | Tuesday 24 March 2026 04:10:14 +0000 (0:00:02.946) 0:00:29.908 ********* 2026-03-24 04:10:22.812284 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.812300 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.812316 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.812332 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:10:22.812347 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.812363 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:10:22.812378 | orchestrator | 2026-03-24 04:10:22.812394 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-24 04:10:22.812410 | orchestrator | Tuesday 24 March 2026 04:10:16 +0000 (0:00:02.026) 0:00:31.935 ********* 2026-03-24 04:10:22.812425 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.812442 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.812458 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.812474 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:10:22.812491 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.812507 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:10:22.812523 | orchestrator | 2026-03-24 04:10:22.812536 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-24 04:10:22.812548 | orchestrator | Tuesday 24 March 2026 04:10:18 +0000 (0:00:02.115) 0:00:34.051 ********* 2026-03-24 04:10:22.812557 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.812567 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.812576 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.812586 | orchestrator | skipping: 
[testbed-node-0] 2026-03-24 04:10:22.812595 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.812604 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:10:22.812614 | orchestrator | 2026-03-24 04:10:22.812628 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-24 04:10:22.812638 | orchestrator | Tuesday 24 March 2026 04:10:20 +0000 (0:00:01.770) 0:00:35.822 ********* 2026-03-24 04:10:22.812648 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-24 04:10:22.812657 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-24 04:10:22.812667 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.812676 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-24 04:10:22.812686 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-24 04:10:22.812695 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.812704 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-24 04:10:22.812714 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-24 04:10:22.812732 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:10:22.812742 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-24 04:10:22.812752 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-24 04:10:22.812761 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:10:22.812770 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-24 04:10:22.812780 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-24 04:10:22.812789 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:10:22.812799 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-24 04:10:22.812808 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-24 04:10:22.812817 | orchestrator | skipping: [testbed-node-2] 2026-03-24 
04:10:22.812827 | orchestrator | 2026-03-24 04:10:22.812836 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-24 04:10:22.812846 | orchestrator | Tuesday 24 March 2026 04:10:22 +0000 (0:00:01.941) 0:00:37.763 ********* 2026-03-24 04:10:22.812855 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:10:22.812865 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:10:22.812885 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:11:59.449635 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:11:59.449748 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:11:59.449762 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:11:59.449773 | orchestrator | 2026-03-24 04:11:59.449785 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-24 04:11:59.449797 | orchestrator | Tuesday 24 March 2026 04:10:24 +0000 (0:00:01.745) 0:00:39.509 ********* 2026-03-24 04:11:59.449808 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:11:59.449818 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:11:59.449828 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:11:59.449838 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:11:59.449848 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:11:59.449858 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:11:59.449867 | orchestrator | 2026-03-24 04:11:59.449877 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-24 04:11:59.449886 | orchestrator | 2026-03-24 04:11:59.449896 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-24 04:11:59.449907 | orchestrator | Tuesday 24 March 2026 04:10:26 +0000 (0:00:02.569) 0:00:42.079 ********* 2026-03-24 04:11:59.449916 | orchestrator | ok: [testbed-node-0] 2026-03-24 
04:11:59.449953 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.449963 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.449973 | orchestrator | 2026-03-24 04:11:59.449984 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-24 04:11:59.449994 | orchestrator | Tuesday 24 March 2026 04:10:28 +0000 (0:00:01.684) 0:00:43.763 ********* 2026-03-24 04:11:59.450004 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.450185 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.450204 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.450215 | orchestrator | 2026-03-24 04:11:59.450226 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-24 04:11:59.450236 | orchestrator | Tuesday 24 March 2026 04:10:30 +0000 (0:00:02.112) 0:00:45.876 ********* 2026-03-24 04:11:59.450247 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:11:59.450257 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:11:59.450267 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:11:59.450275 | orchestrator | 2026-03-24 04:11:59.450304 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-24 04:11:59.450314 | orchestrator | Tuesday 24 March 2026 04:10:32 +0000 (0:00:02.121) 0:00:47.997 ********* 2026-03-24 04:11:59.450324 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.450333 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.450343 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.450374 | orchestrator | 2026-03-24 04:11:59.450385 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-24 04:11:59.450396 | orchestrator | Tuesday 24 March 2026 04:10:34 +0000 (0:00:01.900) 0:00:49.898 ********* 2026-03-24 04:11:59.450406 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:11:59.450417 | orchestrator | skipping: 
[testbed-node-1] 2026-03-24 04:11:59.450427 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:11:59.450437 | orchestrator | 2026-03-24 04:11:59.450447 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-24 04:11:59.450458 | orchestrator | Tuesday 24 March 2026 04:10:35 +0000 (0:00:01.363) 0:00:51.261 ********* 2026-03-24 04:11:59.450468 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.450478 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.450487 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.450496 | orchestrator | 2026-03-24 04:11:59.450506 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-24 04:11:59.450515 | orchestrator | Tuesday 24 March 2026 04:10:37 +0000 (0:00:01.697) 0:00:52.959 ********* 2026-03-24 04:11:59.450525 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.450534 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.450544 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.450551 | orchestrator | 2026-03-24 04:11:59.450560 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-24 04:11:59.450569 | orchestrator | Tuesday 24 March 2026 04:10:39 +0000 (0:00:02.194) 0:00:55.154 ********* 2026-03-24 04:11:59.450578 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:11:59.450587 | orchestrator | 2026-03-24 04:11:59.450596 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-24 04:11:59.450604 | orchestrator | Tuesday 24 March 2026 04:10:41 +0000 (0:00:01.961) 0:00:57.116 ********* 2026-03-24 04:11:59.450613 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.450621 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.450629 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.450638 | 
orchestrator | 2026-03-24 04:11:59.450647 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-24 04:11:59.450655 | orchestrator | Tuesday 24 March 2026 04:10:43 +0000 (0:00:02.283) 0:00:59.400 ********* 2026-03-24 04:11:59.450663 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:11:59.450672 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.450680 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:11:59.450689 | orchestrator | 2026-03-24 04:11:59.450697 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-24 04:11:59.450704 | orchestrator | Tuesday 24 March 2026 04:10:45 +0000 (0:00:01.715) 0:01:01.116 ********* 2026-03-24 04:11:59.450712 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:11:59.450719 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:11:59.450726 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:11:59.450734 | orchestrator | 2026-03-24 04:11:59.450742 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-24 04:11:59.450749 | orchestrator | Tuesday 24 March 2026 04:10:47 +0000 (0:00:01.933) 0:01:03.049 ********* 2026-03-24 04:11:59.450757 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:11:59.450764 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:11:59.450771 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:11:59.450779 | orchestrator | 2026-03-24 04:11:59.450786 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-24 04:11:59.450794 | orchestrator | Tuesday 24 March 2026 04:10:50 +0000 (0:00:02.501) 0:01:05.551 ********* 2026-03-24 04:11:59.450801 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:11:59.450810 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:11:59.450840 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:11:59.450849 | 
orchestrator | 2026-03-24 04:11:59.450856 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-24 04:11:59.450863 | orchestrator | Tuesday 24 March 2026 04:10:51 +0000 (0:00:01.434) 0:01:06.985 ********* 2026-03-24 04:11:59.450882 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:11:59.450890 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:11:59.450897 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:11:59.450905 | orchestrator | 2026-03-24 04:11:59.450912 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-24 04:11:59.450920 | orchestrator | Tuesday 24 March 2026 04:10:53 +0000 (0:00:01.625) 0:01:08.611 ********* 2026-03-24 04:11:59.450928 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:11:59.450935 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:11:59.450942 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:11:59.450950 | orchestrator | 2026-03-24 04:11:59.450957 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-24 04:11:59.450964 | orchestrator | Tuesday 24 March 2026 04:10:55 +0000 (0:00:02.126) 0:01:10.738 ********* 2026-03-24 04:11:59.450972 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.450980 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.450987 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.450995 | orchestrator | 2026-03-24 04:11:59.451002 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-24 04:11:59.451010 | orchestrator | Tuesday 24 March 2026 04:10:57 +0000 (0:00:01.898) 0:01:12.636 ********* 2026-03-24 04:11:59.451018 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.451026 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.451033 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.451041 | orchestrator | 2026-03-24 04:11:59.451049 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-24 04:11:59.451057 | orchestrator | Tuesday 24 March 2026 04:10:58 +0000 (0:00:01.401) 0:01:14.038 ********* 2026-03-24 04:11:59.451084 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-24 04:11:59.451096 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-24 04:11:59.451104 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-24 04:11:59.451112 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-24 04:11:59.451120 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-24 04:11:59.451128 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-24 04:11:59.451136 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.451143 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.451151 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.451160 | orchestrator | 2026-03-24 04:11:59.451169 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-24 04:11:59.451177 | orchestrator | Tuesday 24 March 2026 04:11:22 +0000 (0:00:23.380) 0:01:37.419 ********* 2026-03-24 04:11:59.451185 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:11:59.451194 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:11:59.451202 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:11:59.451209 | orchestrator | 2026-03-24 04:11:59.451217 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-24 04:11:59.451225 | orchestrator | Tuesday 24 March 2026 04:11:23 +0000 (0:00:01.379) 0:01:38.798 ********* 2026-03-24 04:11:59.451232 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:11:59.451240 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:11:59.451249 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:11:59.451256 | orchestrator | 2026-03-24 04:11:59.451264 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-24 04:11:59.451282 | orchestrator | Tuesday 24 March 2026 04:11:25 +0000 (0:00:02.088) 0:01:40.886 ********* 2026-03-24 04:11:59.451290 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.451298 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.451305 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.451312 | orchestrator | 2026-03-24 04:11:59.451318 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-24 04:11:59.451325 | orchestrator | Tuesday 24 March 2026 04:11:27 +0000 (0:00:02.277) 0:01:43.164 ********* 2026-03-24 04:11:59.451332 | orchestrator 
| changed: [testbed-node-2] 2026-03-24 04:11:59.451341 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:11:59.451348 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:11:59.451355 | orchestrator | 2026-03-24 04:11:59.451364 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-24 04:11:59.451371 | orchestrator | Tuesday 24 March 2026 04:11:54 +0000 (0:00:26.510) 0:02:09.674 ********* 2026-03-24 04:11:59.451379 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.451386 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.451394 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.451401 | orchestrator | 2026-03-24 04:11:59.451409 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-24 04:11:59.451416 | orchestrator | Tuesday 24 March 2026 04:11:56 +0000 (0:00:01.830) 0:02:11.505 ********* 2026-03-24 04:11:59.451423 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:11:59.451430 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:11:59.451437 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:11:59.451445 | orchestrator | 2026-03-24 04:11:59.451452 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-24 04:11:59.451460 | orchestrator | Tuesday 24 March 2026 04:11:57 +0000 (0:00:01.528) 0:02:13.034 ********* 2026-03-24 04:11:59.451468 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:11:59.451476 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:11:59.451481 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:11:59.451486 | orchestrator | 2026-03-24 04:11:59.451502 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-24 04:12:47.095934 | orchestrator | Tuesday 24 March 2026 04:11:59 +0000 (0:00:01.814) 0:02:14.848 ********* 2026-03-24 04:12:47.096087 | orchestrator | ok: [testbed-node-0] 2026-03-24 
04:12:47.096116 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:12:47.096166 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:12:47.096187 | orchestrator | 2026-03-24 04:12:47.096211 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-24 04:12:47.096229 | orchestrator | Tuesday 24 March 2026 04:12:01 +0000 (0:00:01.699) 0:02:16.548 ********* 2026-03-24 04:12:47.096245 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:12:47.096257 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:12:47.096270 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:12:47.096282 | orchestrator | 2026-03-24 04:12:47.096301 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-24 04:12:47.096321 | orchestrator | Tuesday 24 March 2026 04:12:02 +0000 (0:00:01.246) 0:02:17.794 ********* 2026-03-24 04:12:47.096340 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:12:47.096360 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:12:47.096379 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:12:47.096395 | orchestrator | 2026-03-24 04:12:47.096413 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-24 04:12:47.096427 | orchestrator | Tuesday 24 March 2026 04:12:04 +0000 (0:00:01.625) 0:02:19.419 ********* 2026-03-24 04:12:47.096440 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:12:47.096453 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:12:47.096467 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:12:47.096480 | orchestrator | 2026-03-24 04:12:47.096492 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-24 04:12:47.096506 | orchestrator | Tuesday 24 March 2026 04:12:05 +0000 (0:00:01.802) 0:02:21.222 ********* 2026-03-24 04:12:47.096520 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:12:47.096565 | orchestrator | changed: 
[testbed-node-1] 2026-03-24 04:12:47.096579 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:12:47.096591 | orchestrator | 2026-03-24 04:12:47.096604 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-24 04:12:47.096628 | orchestrator | Tuesday 24 March 2026 04:12:07 +0000 (0:00:01.833) 0:02:23.055 ********* 2026-03-24 04:12:47.096642 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:12:47.096656 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:12:47.096671 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:12:47.096684 | orchestrator | 2026-03-24 04:12:47.096698 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-24 04:12:47.096713 | orchestrator | Tuesday 24 March 2026 04:12:09 +0000 (0:00:01.958) 0:02:25.013 ********* 2026-03-24 04:12:47.096728 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:12:47.096745 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:12:47.096760 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:12:47.096776 | orchestrator | 2026-03-24 04:12:47.096792 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-24 04:12:47.096808 | orchestrator | Tuesday 24 March 2026 04:12:10 +0000 (0:00:01.345) 0:02:26.359 ********* 2026-03-24 04:12:47.096825 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:12:47.096839 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:12:47.096851 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:12:47.096867 | orchestrator | 2026-03-24 04:12:47.096882 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-24 04:12:47.096899 | orchestrator | Tuesday 24 March 2026 04:12:12 +0000 (0:00:01.323) 0:02:27.682 ********* 2026-03-24 04:12:47.096916 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:12:47.096932 | orchestrator | ok: [testbed-node-0] 
2026-03-24 04:12:47.096950 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:12:47.096966 | orchestrator | 2026-03-24 04:12:47.096981 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-24 04:12:47.096997 | orchestrator | Tuesday 24 March 2026 04:12:13 +0000 (0:00:01.702) 0:02:29.385 ********* 2026-03-24 04:12:47.097012 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:12:47.097027 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:12:47.097039 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:12:47.097052 | orchestrator | 2026-03-24 04:12:47.097070 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-24 04:12:47.097089 | orchestrator | Tuesday 24 March 2026 04:12:15 +0000 (0:00:01.727) 0:02:31.113 ********* 2026-03-24 04:12:47.097106 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-24 04:12:47.097120 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-24 04:12:47.097191 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-24 04:12:47.097207 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-24 04:12:47.097232 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-24 04:12:47.097245 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-24 04:12:47.097260 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-24 04:12:47.097273 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-24 04:12:47.097287 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-24 04:12:47.097301 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-24 04:12:47.097315 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-24 04:12:47.097347 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-24 04:12:47.097377 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-24 04:12:47.097386 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-24 04:12:47.097393 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-24 04:12:47.097401 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-24 04:12:47.097409 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-24 04:12:47.097417 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-24 04:12:47.097425 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-24 04:12:47.097433 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-24 04:12:47.097441 | orchestrator | 2026-03-24 04:12:47.097449 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-24 04:12:47.097457 | orchestrator | 2026-03-24 04:12:47.097465 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-24 04:12:47.097473 | orchestrator | Tuesday 24 March 2026 04:12:20 +0000 (0:00:04.513) 0:02:35.626 ********* 
2026-03-24 04:12:47.097481 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:12:47.097489 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:12:47.097497 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:12:47.097504 | orchestrator | 2026-03-24 04:12:47.097512 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-24 04:12:47.097520 | orchestrator | Tuesday 24 March 2026 04:12:21 +0000 (0:00:01.425) 0:02:37.051 ********* 2026-03-24 04:12:47.097528 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:12:47.097536 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:12:47.097544 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:12:47.097552 | orchestrator | 2026-03-24 04:12:47.097566 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-24 04:12:47.097579 | orchestrator | Tuesday 24 March 2026 04:12:23 +0000 (0:00:01.658) 0:02:38.710 ********* 2026-03-24 04:12:47.097591 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:12:47.097605 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:12:47.097618 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:12:47.097631 | orchestrator | 2026-03-24 04:12:47.097643 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-24 04:12:47.097654 | orchestrator | Tuesday 24 March 2026 04:12:24 +0000 (0:00:01.482) 0:02:40.192 ********* 2026-03-24 04:12:47.097667 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 04:12:47.097681 | orchestrator | 2026-03-24 04:12:47.097692 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-24 04:12:47.097703 | orchestrator | Tuesday 24 March 2026 04:12:26 +0000 (0:00:01.720) 0:02:41.913 ********* 2026-03-24 04:12:47.097715 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:12:47.097728 | orchestrator | 
skipping: [testbed-node-4] 2026-03-24 04:12:47.097741 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:12:47.097754 | orchestrator | 2026-03-24 04:12:47.097766 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-24 04:12:47.097777 | orchestrator | Tuesday 24 March 2026 04:12:27 +0000 (0:00:01.279) 0:02:43.193 ********* 2026-03-24 04:12:47.097791 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:12:47.097804 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:12:47.097817 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:12:47.097831 | orchestrator | 2026-03-24 04:12:47.097845 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-24 04:12:47.097859 | orchestrator | Tuesday 24 March 2026 04:12:29 +0000 (0:00:01.535) 0:02:44.729 ********* 2026-03-24 04:12:47.097883 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:12:47.097896 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:12:47.097904 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:12:47.097911 | orchestrator | 2026-03-24 04:12:47.097919 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-24 04:12:47.097927 | orchestrator | Tuesday 24 March 2026 04:12:30 +0000 (0:00:01.404) 0:02:46.133 ********* 2026-03-24 04:12:47.097935 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:12:47.097943 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:12:47.097951 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:12:47.097959 | orchestrator | 2026-03-24 04:12:47.097967 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-24 04:12:47.097975 | orchestrator | Tuesday 24 March 2026 04:12:32 +0000 (0:00:01.682) 0:02:47.815 ********* 2026-03-24 04:12:47.097983 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:12:47.097991 | orchestrator | ok: [testbed-node-4] 
2026-03-24 04:12:47.097998 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:12:47.098007 | orchestrator | 2026-03-24 04:12:47.098079 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-24 04:12:47.098090 | orchestrator | Tuesday 24 March 2026 04:12:34 +0000 (0:00:02.162) 0:02:49.978 ********* 2026-03-24 04:12:47.098099 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:12:47.098107 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:12:47.098115 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:12:47.098122 | orchestrator | 2026-03-24 04:12:47.098158 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-24 04:12:47.098168 | orchestrator | Tuesday 24 March 2026 04:12:36 +0000 (0:00:02.281) 0:02:52.259 ********* 2026-03-24 04:12:47.098186 | orchestrator | changed: [testbed-node-4] 2026-03-24 04:12:47.098194 | orchestrator | changed: [testbed-node-3] 2026-03-24 04:12:47.098202 | orchestrator | changed: [testbed-node-5] 2026-03-24 04:12:47.098210 | orchestrator | 2026-03-24 04:12:47.098219 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-24 04:12:47.098226 | orchestrator | 2026-03-24 04:12:47.098234 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-24 04:12:47.098242 | orchestrator | Tuesday 24 March 2026 04:12:44 +0000 (0:00:08.073) 0:03:00.333 ********* 2026-03-24 04:12:47.098250 | orchestrator | ok: [testbed-manager] 2026-03-24 04:12:47.098258 | orchestrator | 2026-03-24 04:12:47.098266 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-24 04:12:47.098284 | orchestrator | Tuesday 24 March 2026 04:12:47 +0000 (0:00:02.158) 0:03:02.491 ********* 2026-03-24 04:13:53.857422 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857527 | orchestrator | 2026-03-24 04:13:53.857539 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-24 04:13:53.857546 | orchestrator | Tuesday 24 March 2026 04:12:48 +0000 (0:00:01.450) 0:03:03.942 ********* 2026-03-24 04:13:53.857554 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-24 04:13:53.857561 | orchestrator | 2026-03-24 04:13:53.857567 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-24 04:13:53.857573 | orchestrator | Tuesday 24 March 2026 04:12:50 +0000 (0:00:01.506) 0:03:05.448 ********* 2026-03-24 04:13:53.857580 | orchestrator | changed: [testbed-manager] 2026-03-24 04:13:53.857586 | orchestrator | 2026-03-24 04:13:53.857593 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-24 04:13:53.857599 | orchestrator | Tuesday 24 March 2026 04:12:51 +0000 (0:00:01.881) 0:03:07.329 ********* 2026-03-24 04:13:53.857603 | orchestrator | changed: [testbed-manager] 2026-03-24 04:13:53.857607 | orchestrator | 2026-03-24 04:13:53.857611 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-24 04:13:53.857615 | orchestrator | Tuesday 24 March 2026 04:12:53 +0000 (0:00:01.535) 0:03:08.864 ********* 2026-03-24 04:13:53.857620 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-24 04:13:53.857640 | orchestrator | 2026-03-24 04:13:53.857645 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-24 04:13:53.857649 | orchestrator | Tuesday 24 March 2026 04:12:56 +0000 (0:00:02.820) 0:03:11.684 ********* 2026-03-24 04:13:53.857653 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-24 04:13:53.857656 | orchestrator | 2026-03-24 04:13:53.857660 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-24 04:13:53.857664 | orchestrator | Tuesday 24 March 
2026 04:12:58 +0000 (0:00:01.808) 0:03:13.493 ********* 2026-03-24 04:13:53.857674 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857678 | orchestrator | 2026-03-24 04:13:53.857681 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-24 04:13:53.857685 | orchestrator | Tuesday 24 March 2026 04:12:59 +0000 (0:00:01.384) 0:03:14.878 ********* 2026-03-24 04:13:53.857689 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857693 | orchestrator | 2026-03-24 04:13:53.857696 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-24 04:13:53.857701 | orchestrator | 2026-03-24 04:13:53.857704 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-24 04:13:53.857708 | orchestrator | Tuesday 24 March 2026 04:13:01 +0000 (0:00:01.614) 0:03:16.492 ********* 2026-03-24 04:13:53.857712 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857715 | orchestrator | 2026-03-24 04:13:53.857719 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-24 04:13:53.857723 | orchestrator | Tuesday 24 March 2026 04:13:02 +0000 (0:00:01.107) 0:03:17.599 ********* 2026-03-24 04:13:53.857727 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-24 04:13:53.857731 | orchestrator | 2026-03-24 04:13:53.857735 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-24 04:13:53.857739 | orchestrator | Tuesday 24 March 2026 04:13:03 +0000 (0:00:01.431) 0:03:19.030 ********* 2026-03-24 04:13:53.857742 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857746 | orchestrator | 2026-03-24 04:13:53.857749 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-03-24 04:13:53.857753 | orchestrator | Tuesday 24 March 2026 
04:13:05 +0000 (0:00:01.760) 0:03:20.791 ********* 2026-03-24 04:13:53.857757 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857761 | orchestrator | 2026-03-24 04:13:53.857764 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-24 04:13:53.857768 | orchestrator | Tuesday 24 March 2026 04:13:07 +0000 (0:00:02.506) 0:03:23.298 ********* 2026-03-24 04:13:53.857772 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857775 | orchestrator | 2026-03-24 04:13:53.857779 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-24 04:13:53.857783 | orchestrator | Tuesday 24 March 2026 04:13:09 +0000 (0:00:01.453) 0:03:24.752 ********* 2026-03-24 04:13:53.857786 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857790 | orchestrator | 2026-03-24 04:13:53.857794 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-24 04:13:53.857797 | orchestrator | Tuesday 24 March 2026 04:13:10 +0000 (0:00:01.472) 0:03:26.225 ********* 2026-03-24 04:13:53.857801 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857805 | orchestrator | 2026-03-24 04:13:53.857808 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-24 04:13:53.857812 | orchestrator | Tuesday 24 March 2026 04:13:12 +0000 (0:00:01.547) 0:03:27.772 ********* 2026-03-24 04:13:53.857816 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857820 | orchestrator | 2026-03-24 04:13:53.857823 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-24 04:13:53.857827 | orchestrator | Tuesday 24 March 2026 04:13:14 +0000 (0:00:02.324) 0:03:30.097 ********* 2026-03-24 04:13:53.857831 | orchestrator | ok: [testbed-manager] 2026-03-24 04:13:53.857834 | orchestrator | 2026-03-24 04:13:53.857838 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-03-24 04:13:53.857846 | orchestrator | 2026-03-24 04:13:53.857850 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-24 04:13:53.857853 | orchestrator | Tuesday 24 March 2026 04:13:16 +0000 (0:00:01.700) 0:03:31.798 ********* 2026-03-24 04:13:53.857857 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:13:53.857861 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:13:53.857865 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:13:53.857870 | orchestrator | 2026-03-24 04:13:53.857874 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-24 04:13:53.857878 | orchestrator | Tuesday 24 March 2026 04:13:17 +0000 (0:00:01.356) 0:03:33.154 ********* 2026-03-24 04:13:53.857883 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:13:53.857887 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:13:53.857891 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:13:53.857896 | orchestrator | 2026-03-24 04:13:53.857910 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-24 04:13:53.857914 | orchestrator | Tuesday 24 March 2026 04:13:19 +0000 (0:00:01.524) 0:03:34.679 ********* 2026-03-24 04:13:53.857919 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:13:53.857923 | orchestrator | 2026-03-24 04:13:53.857927 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-24 04:13:53.857932 | orchestrator | Tuesday 24 March 2026 04:13:20 +0000 (0:00:01.709) 0:03:36.389 ********* 2026-03-24 04:13:53.857936 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-24 04:13:53.857940 | orchestrator | 2026-03-24 04:13:53.857944 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-03-24 04:13:53.857948 | orchestrator | Tuesday 24 March 2026 04:13:22 +0000 (0:00:01.821) 0:03:38.210 ********* 2026-03-24 04:13:53.857953 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 04:13:53.857957 | orchestrator | 2026-03-24 04:13:53.857961 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-24 04:13:53.857966 | orchestrator | Tuesday 24 March 2026 04:13:24 +0000 (0:00:01.805) 0:03:40.015 ********* 2026-03-24 04:13:53.857970 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:13:53.857974 | orchestrator | 2026-03-24 04:13:53.857978 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-24 04:13:53.857983 | orchestrator | Tuesday 24 March 2026 04:13:25 +0000 (0:00:01.123) 0:03:41.139 ********* 2026-03-24 04:13:53.857987 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 04:13:53.857991 | orchestrator | 2026-03-24 04:13:53.857995 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-24 04:13:53.858000 | orchestrator | Tuesday 24 March 2026 04:13:27 +0000 (0:00:01.939) 0:03:43.079 ********* 2026-03-24 04:13:53.858004 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 04:13:53.858008 | orchestrator | 2026-03-24 04:13:53.858013 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-24 04:13:53.858072 | orchestrator | Tuesday 24 March 2026 04:13:29 +0000 (0:00:02.033) 0:03:45.112 ********* 2026-03-24 04:13:53.858076 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-24 04:13:53.858080 | orchestrator | 2026-03-24 04:13:53.858085 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-24 04:13:53.858089 | orchestrator | Tuesday 24 March 2026 04:13:30 +0000 (0:00:01.148) 0:03:46.261 ********* 2026-03-24 04:13:53.858094 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-03-24 04:13:53.858098 | orchestrator | 2026-03-24 04:13:53.858102 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-24 04:13:53.858107 | orchestrator | Tuesday 24 March 2026 04:13:31 +0000 (0:00:01.112) 0:03:47.373 ********* 2026-03-24 04:13:53.858111 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-03-24 04:13:53.858115 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-03-24 04:13:53.858121 | orchestrator | } 2026-03-24 04:13:53.858128 | orchestrator | 2026-03-24 04:13:53.858133 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-24 04:13:53.858137 | orchestrator | Tuesday 24 March 2026 04:13:33 +0000 (0:00:01.099) 0:03:48.473 ********* 2026-03-24 04:13:53.858141 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:13:53.858145 | orchestrator | 2026-03-24 04:13:53.858149 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-24 04:13:53.858153 | orchestrator | Tuesday 24 March 2026 04:13:34 +0000 (0:00:01.122) 0:03:49.596 ********* 2026-03-24 04:13:53.858158 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-24 04:13:53.858162 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-24 04:13:53.858166 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-24 04:13:53.858171 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-24 04:13:53.858175 | orchestrator | 2026-03-24 04:13:53.858179 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-24 04:13:53.858184 | orchestrator | Tuesday 24 March 2026 04:13:39 +0000 (0:00:05.356) 0:03:54.953 ********* 2026-03-24 04:13:53.858191 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-03-24 04:13:53.858197 | orchestrator | 2026-03-24 04:13:53.858202 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-24 04:13:53.858223 | orchestrator | Tuesday 24 March 2026 04:13:41 +0000 (0:00:02.395) 0:03:57.349 ********* 2026-03-24 04:13:53.858230 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-24 04:13:53.858236 | orchestrator | 2026-03-24 04:13:53.858241 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-24 04:13:53.858247 | orchestrator | Tuesday 24 March 2026 04:13:44 +0000 (0:00:02.508) 0:03:59.857 ********* 2026-03-24 04:13:53.858253 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-24 04:13:53.858259 | orchestrator | 2026-03-24 04:13:53.858264 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-24 04:13:53.858270 | orchestrator | Tuesday 24 March 2026 04:13:48 +0000 (0:00:04.170) 0:04:04.027 ********* 2026-03-24 04:13:53.858276 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:13:53.858282 | orchestrator | 2026-03-24 04:13:53.858288 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-24 04:13:53.858294 | orchestrator | Tuesday 24 March 2026 04:13:49 +0000 (0:00:01.108) 0:04:05.136 ********* 2026-03-24 04:13:53.858300 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-24 04:13:53.858306 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-24 04:13:53.858313 | orchestrator | 2026-03-24 04:13:53.858320 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-24 04:13:53.858329 | orchestrator | Tuesday 24 March 2026 04:13:52 +0000 (0:00:02.749) 0:04:07.886 ********* 2026-03-24 
04:13:53.858333 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:13:53.858342 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:14:18.030322 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:14:18.030412 | orchestrator | 2026-03-24 04:14:18.030420 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-24 04:14:18.030427 | orchestrator | Tuesday 24 March 2026 04:13:53 +0000 (0:00:01.367) 0:04:09.253 ********* 2026-03-24 04:14:18.030432 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:14:18.030437 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:14:18.030441 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:14:18.030446 | orchestrator | 2026-03-24 04:14:18.030450 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-24 04:14:18.030454 | orchestrator | 2026-03-24 04:14:18.030459 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-24 04:14:18.030463 | orchestrator | Tuesday 24 March 2026 04:13:56 +0000 (0:00:02.174) 0:04:11.428 ********* 2026-03-24 04:14:18.030468 | orchestrator | ok: [testbed-manager] 2026-03-24 04:14:18.030489 | orchestrator | 2026-03-24 04:14:18.030494 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-24 04:14:18.030498 | orchestrator | Tuesday 24 March 2026 04:13:57 +0000 (0:00:01.099) 0:04:12.528 ********* 2026-03-24 04:14:18.030502 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-24 04:14:18.030507 | orchestrator | 2026-03-24 04:14:18.030512 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-24 04:14:18.030516 | orchestrator | Tuesday 24 March 2026 04:13:58 +0000 (0:00:01.445) 0:04:13.973 ********* 2026-03-24 04:14:18.030520 | orchestrator | ok: [testbed-manager] 2026-03-24 04:14:18.030524 | 
orchestrator | 2026-03-24 04:14:18.030529 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-24 04:14:18.030533 | orchestrator | 2026-03-24 04:14:18.030537 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-24 04:14:18.030553 | orchestrator | Tuesday 24 March 2026 04:14:04 +0000 (0:00:05.559) 0:04:19.532 ********* 2026-03-24 04:14:18.030557 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:14:18.030561 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:14:18.030565 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:14:18.030569 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:14:18.030573 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:14:18.030578 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:14:18.030582 | orchestrator | 2026-03-24 04:14:18.030586 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-24 04:14:18.030590 | orchestrator | Tuesday 24 March 2026 04:14:05 +0000 (0:00:01.741) 0:04:21.273 ********* 2026-03-24 04:14:18.030594 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-24 04:14:18.030598 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-24 04:14:18.030603 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-24 04:14:18.030607 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-24 04:14:18.030611 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-24 04:14:18.030615 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-24 04:14:18.030619 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 
2026-03-24 04:14:18.030623 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-24 04:14:18.030627 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-24 04:14:18.030632 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-24 04:14:18.030636 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-24 04:14:18.030640 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-24 04:14:18.030644 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-24 04:14:18.030648 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-24 04:14:18.030652 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-24 04:14:18.030656 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-24 04:14:18.030660 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-24 04:14:18.030664 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-24 04:14:18.030668 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-24 04:14:18.030672 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-24 04:14:18.030681 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-24 04:14:18.030685 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-24 04:14:18.030689 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-24 
04:14:18.030693 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-24 04:14:18.030697 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-24 04:14:18.030701 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-24 04:14:18.030718 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-24 04:14:18.030722 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-24 04:14:18.030726 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-24 04:14:18.030730 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-24 04:14:18.030734 | orchestrator | 2026-03-24 04:14:18.030739 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-24 04:14:18.030743 | orchestrator | Tuesday 24 March 2026 04:14:13 +0000 (0:00:08.004) 0:04:29.278 ********* 2026-03-24 04:14:18.030747 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:14:18.030751 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:14:18.030755 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:14:18.030759 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:14:18.030763 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:14:18.030767 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:14:18.030771 | orchestrator | 2026-03-24 04:14:18.030776 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-24 04:14:18.030780 | orchestrator | Tuesday 24 March 2026 04:14:15 +0000 (0:00:01.801) 0:04:31.079 ********* 2026-03-24 04:14:18.030784 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:14:18.030788 | orchestrator | skipping: [testbed-node-4] 
2026-03-24 04:14:18.030792 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:14:18.030796 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:14:18.030800 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:14:18.030804 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:14:18.030808 | orchestrator | 2026-03-24 04:14:18.030812 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:14:18.030820 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 04:14:18.030827 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-24 04:14:18.030832 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-24 04:14:18.030837 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-24 04:14:18.030842 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-24 04:14:18.030846 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-24 04:14:18.030851 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-24 04:14:18.030855 | orchestrator | 2026-03-24 04:14:18.030860 | orchestrator | 2026-03-24 04:14:18.030865 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:14:18.030874 | orchestrator | Tuesday 24 March 2026 04:14:18 +0000 (0:00:02.330) 0:04:33.411 ********* 2026-03-24 04:14:18.030879 | orchestrator | =============================================================================== 2026-03-24 04:14:18.030884 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.51s 2026-03-24 04:14:18.030889 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.38s 2026-03-24 04:14:18.030894 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.07s 2026-03-24 04:14:18.030899 | orchestrator | Manage labels ----------------------------------------------------------- 8.00s 2026-03-24 04:14:18.030903 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.56s 2026-03-24 04:14:18.030908 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.36s 2026-03-24 04:14:18.030913 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.51s 2026-03-24 04:14:18.030918 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.17s 2026-03-24 04:14:18.030923 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.09s 2026-03-24 04:14:18.030928 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.09s 2026-03-24 04:14:18.030932 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.95s 2026-03-24 04:14:18.030937 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.82s 2026-03-24 04:14:18.030942 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.75s 2026-03-24 04:14:18.030947 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.57s 2026-03-24 04:14:18.030952 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.51s 2026-03-24 04:14:18.030956 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.51s 2026-03-24 04:14:18.030961 | orchestrator | k3s_server : Copy vip 
manifest to first master -------------------------- 2.50s 2026-03-24 04:14:18.030966 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.43s 2026-03-24 04:14:18.030974 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.40s 2026-03-24 04:14:18.450051 | orchestrator | Manage taints ----------------------------------------------------------- 2.33s 2026-03-24 04:14:18.723633 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-24 04:14:18.723727 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-03-24 04:14:18.729680 | orchestrator | + set -e 2026-03-24 04:14:18.729754 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 04:14:18.729769 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 04:14:18.729782 | orchestrator | ++ INTERACTIVE=false 2026-03-24 04:14:18.729794 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 04:14:18.729805 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 04:14:18.729816 | orchestrator | + osism apply openstackclient 2026-03-24 04:14:30.910311 | orchestrator | 2026-03-24 04:14:30 | INFO  | Task e9e46d63-0b7f-4813-a308-0118e2fb0a28 (openstackclient) was prepared for execution. 2026-03-24 04:14:30.910389 | orchestrator | 2026-03-24 04:14:30 | INFO  | It takes a moment until task e9e46d63-0b7f-4813-a308-0118e2fb0a28 (openstackclient) has been started and output is visible here. 
2026-03-24 04:15:04.409186 | orchestrator | 2026-03-24 04:15:04.409367 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-03-24 04:15:04.409398 | orchestrator | 2026-03-24 04:15:04.409423 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-03-24 04:15:04.409451 | orchestrator | Tuesday 24 March 2026 04:14:37 +0000 (0:00:02.072) 0:00:02.072 ********* 2026-03-24 04:15:04.409472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-03-24 04:15:04.409532 | orchestrator | 2026-03-24 04:15:04.409551 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-03-24 04:15:04.409569 | orchestrator | Tuesday 24 March 2026 04:14:39 +0000 (0:00:01.839) 0:00:03.911 ********* 2026-03-24 04:15:04.409588 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-03-24 04:15:04.409632 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-03-24 04:15:04.409653 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-03-24 04:15:04.409674 | orchestrator | 2026-03-24 04:15:04.409693 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-03-24 04:15:04.409714 | orchestrator | Tuesday 24 March 2026 04:14:41 +0000 (0:00:02.209) 0:00:06.120 ********* 2026-03-24 04:15:04.409735 | orchestrator | changed: [testbed-manager] 2026-03-24 04:15:04.409754 | orchestrator | 2026-03-24 04:15:04.409767 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-03-24 04:15:04.409779 | orchestrator | Tuesday 24 March 2026 04:14:43 +0000 (0:00:02.114) 0:00:08.235 ********* 2026-03-24 04:15:04.409793 | orchestrator | ok: [testbed-manager] 2026-03-24 04:15:04.409806 | 
orchestrator | 2026-03-24 04:15:04.409819 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-03-24 04:15:04.409832 | orchestrator | Tuesday 24 March 2026 04:14:45 +0000 (0:00:01.970) 0:00:10.205 ********* 2026-03-24 04:15:04.409846 | orchestrator | ok: [testbed-manager] 2026-03-24 04:15:04.409858 | orchestrator | 2026-03-24 04:15:04.409871 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-03-24 04:15:04.409884 | orchestrator | Tuesday 24 March 2026 04:14:47 +0000 (0:00:01.827) 0:00:12.033 ********* 2026-03-24 04:15:04.409896 | orchestrator | ok: [testbed-manager] 2026-03-24 04:15:04.409907 | orchestrator | 2026-03-24 04:15:04.409918 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-03-24 04:15:04.409929 | orchestrator | Tuesday 24 March 2026 04:14:48 +0000 (0:00:01.393) 0:00:13.427 ********* 2026-03-24 04:15:04.409941 | orchestrator | changed: [testbed-manager] 2026-03-24 04:15:04.409952 | orchestrator | 2026-03-24 04:15:04.409963 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-03-24 04:15:04.409974 | orchestrator | Tuesday 24 March 2026 04:14:58 +0000 (0:00:09.987) 0:00:23.414 ********* 2026-03-24 04:15:04.409985 | orchestrator | changed: [testbed-manager] 2026-03-24 04:15:04.409996 | orchestrator | 2026-03-24 04:15:04.410095 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-03-24 04:15:04.410124 | orchestrator | Tuesday 24 March 2026 04:15:00 +0000 (0:00:01.863) 0:00:25.277 ********* 2026-03-24 04:15:04.410142 | orchestrator | changed: [testbed-manager] 2026-03-24 04:15:04.410159 | orchestrator | 2026-03-24 04:15:04.410178 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-03-24 04:15:04.410197 | orchestrator | Tuesday 24 March 2026 
04:15:02 +0000 (0:00:01.485) 0:00:26.763 ********* 2026-03-24 04:15:04.410214 | orchestrator | ok: [testbed-manager] 2026-03-24 04:15:04.410234 | orchestrator | 2026-03-24 04:15:04.410253 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:15:04.410272 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-24 04:15:04.410368 | orchestrator | 2026-03-24 04:15:04.410391 | orchestrator | 2026-03-24 04:15:04.410410 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:15:04.410429 | orchestrator | Tuesday 24 March 2026 04:15:04 +0000 (0:00:01.937) 0:00:28.700 ********* 2026-03-24 04:15:04.410449 | orchestrator | =============================================================================== 2026-03-24 04:15:04.410467 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 9.99s 2026-03-24 04:15:04.410486 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.21s 2026-03-24 04:15:04.410497 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.11s 2026-03-24 04:15:04.410523 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.97s 2026-03-24 04:15:04.410535 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.94s 2026-03-24 04:15:04.410546 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.86s 2026-03-24 04:15:04.410557 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.84s 2026-03-24 04:15:04.410568 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.83s 2026-03-24 04:15:04.410587 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.49s 2026-03-24 
04:15:04.410614 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.39s 2026-03-24 04:15:04.718952 | orchestrator | + osism apply -a upgrade common 2026-03-24 04:15:06.645056 | orchestrator | 2026-03-24 04:15:06 | INFO  | Task 2ad86dd7-6f3a-4dc4-880c-894a0cbfd6d0 (common) was prepared for execution. 2026-03-24 04:15:06.645176 | orchestrator | 2026-03-24 04:15:06 | INFO  | It takes a moment until task 2ad86dd7-6f3a-4dc4-880c-894a0cbfd6d0 (common) has been started and output is visible here. 2026-03-24 04:15:24.289072 | orchestrator | 2026-03-24 04:15:24.289170 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-24 04:15:24.289181 | orchestrator | 2026-03-24 04:15:24.289188 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-24 04:15:24.289195 | orchestrator | Tuesday 24 March 2026 04:15:12 +0000 (0:00:02.067) 0:00:02.067 ********* 2026-03-24 04:15:24.289202 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 04:15:24.289210 | orchestrator | 2026-03-24 04:15:24.289216 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-24 04:15:24.289222 | orchestrator | Tuesday 24 March 2026 04:15:15 +0000 (0:00:03.185) 0:00:05.252 ********* 2026-03-24 04:15:24.289243 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 04:15:24.289251 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 04:15:24.289258 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 04:15:24.289266 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 04:15:24.289276 | orchestrator | ok: [testbed-node-2] 
=> (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 04:15:24.289284 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 04:15:24.289292 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 04:15:24.289299 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 04:15:24.289384 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-24 04:15:24.289391 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 04:15:24.289398 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 04:15:24.289404 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 04:15:24.289410 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 04:15:24.289416 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 04:15:24.289423 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 04:15:24.289429 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-24 04:15:24.289436 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 04:15:24.289443 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 04:15:24.289471 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 04:15:24.289478 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 04:15:24.289485 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-24 04:15:24.289491 | 
orchestrator | 2026-03-24 04:15:24.289497 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-24 04:15:24.289503 | orchestrator | Tuesday 24 March 2026 04:15:19 +0000 (0:00:03.475) 0:00:08.728 ********* 2026-03-24 04:15:24.289509 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 04:15:24.289517 | orchestrator | 2026-03-24 04:15:24.289523 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-24 04:15:24.289529 | orchestrator | Tuesday 24 March 2026 04:15:21 +0000 (0:00:02.601) 0:00:11.329 ********* 2026-03-24 04:15:24.289539 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:24.289553 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:24.289581 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:24.289589 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:24.289595 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:24.289601 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:24.289615 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:24.289622 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:24.289754 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:24.289782 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032020 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032139 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032166 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032202 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032215 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032226 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032236 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032267 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032274 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032280 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032296 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:27.032302 | orchestrator | 2026-03-24 04:15:27.032373 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-24 04:15:27.032380 | orchestrator | Tuesday 24 March 2026 04:15:26 +0000 (0:00:04.453) 0:00:15.783 ********* 2026-03-24 04:15:27.032389 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:27.032397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:27.032403 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:27.032409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:27.032424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:28.925015 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:28.925137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:28.925151 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:15:28.925163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:28.925172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:28.925182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:28.925191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:28.925245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 
04:15:28.925272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:28.925288 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:15:28.925296 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:15:28.925303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:28.925402 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:15:28.925416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:28.925425 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:15:28.925436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:28.925446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:28.925457 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:28.925466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:28.925474 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:15:28.925490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232237 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:15:30.232257 | orchestrator | 2026-03-24 04:15:30.232270 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-24 04:15:30.232283 | orchestrator | Tuesday 24 March 2026 04:15:28 +0000 (0:00:02.679) 0:00:18.462 ********* 2026-03-24 04:15:30.232361 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:30.232374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:30.232386 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:30.232441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232489 | orchestrator | skipping: [testbed-node-0] 
2026-03-24 04:15:30.232500 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:30.232526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:30.232539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:30.232564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:44.702406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:44.702496 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:15:44.702505 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:15:44.702511 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:15:44.702517 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:15:44.702525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:44.702548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:44.702554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:44.702560 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:15:44.702566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:15:44.702573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:44.702595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:15:44.702605 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:15:44.702614 | orchestrator | 2026-03-24 04:15:44.702624 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-24 04:15:44.702634 | orchestrator | Tuesday 24 March 2026 04:15:32 +0000 (0:00:03.243) 0:00:21.706 ********* 2026-03-24 04:15:44.702643 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:15:44.702666 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:15:44.702675 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:15:44.702684 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:15:44.702692 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:15:44.702702 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:15:44.702707 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:15:44.702713 | orchestrator | 2026-03-24 04:15:44.702718 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-24 04:15:44.702724 | orchestrator | Tuesday 24 March 2026 04:15:34 +0000 (0:00:02.411) 0:00:24.118 ********* 2026-03-24 04:15:44.702729 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:15:44.702735 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:15:44.702740 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:15:44.702746 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:15:44.702751 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:15:44.702756 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:15:44.702761 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:15:44.702766 | orchestrator | 2026-03-24 04:15:44.702772 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-24 
04:15:44.702777 | orchestrator | Tuesday 24 March 2026 04:15:36 +0000 (0:00:02.070) 0:00:26.188 ********* 2026-03-24 04:15:44.702782 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:15:44.702788 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:15:44.702793 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:15:44.702798 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:15:44.702807 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:15:44.702813 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:15:44.702818 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:15:44.702823 | orchestrator | 2026-03-24 04:15:44.702828 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-24 04:15:44.702834 | orchestrator | Tuesday 24 March 2026 04:15:38 +0000 (0:00:01.973) 0:00:28.161 ********* 2026-03-24 04:15:44.702839 | orchestrator | changed: [testbed-manager] 2026-03-24 04:15:44.702844 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:15:44.702849 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:15:44.702855 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:15:44.702860 | orchestrator | changed: [testbed-node-3] 2026-03-24 04:15:44.702865 | orchestrator | changed: [testbed-node-4] 2026-03-24 04:15:44.702870 | orchestrator | changed: [testbed-node-5] 2026-03-24 04:15:44.702875 | orchestrator | 2026-03-24 04:15:44.702881 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-24 04:15:44.702886 | orchestrator | Tuesday 24 March 2026 04:15:41 +0000 (0:00:02.910) 0:00:31.072 ********* 2026-03-24 04:15:44.702899 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:44.702906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:44.702912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:44.702918 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:44.702935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:46.400777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:46.400863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:46.400885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:15:46.400890 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400955 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:15:46.400963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:03.518237 | orchestrator | 2026-03-24 04:16:03.518442 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-24 04:16:03.518472 | orchestrator | Tuesday 24 March 2026 04:15:46 +0000 (0:00:04.870) 0:00:35.942 ********* 2026-03-24 
04:16:03.518490 | orchestrator | [WARNING]: Skipped 2026-03-24 04:16:03.518509 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-24 04:16:03.518526 | orchestrator | to this access issue: 2026-03-24 04:16:03.518590 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-24 04:16:03.518602 | orchestrator | directory 2026-03-24 04:16:03.518612 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 04:16:03.518623 | orchestrator | 2026-03-24 04:16:03.518651 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-24 04:16:03.518662 | orchestrator | Tuesday 24 March 2026 04:15:48 +0000 (0:00:02.185) 0:00:38.127 ********* 2026-03-24 04:16:03.518672 | orchestrator | [WARNING]: Skipped 2026-03-24 04:16:03.518682 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-24 04:16:03.518691 | orchestrator | to this access issue: 2026-03-24 04:16:03.518701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-24 04:16:03.518711 | orchestrator | directory 2026-03-24 04:16:03.518720 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 04:16:03.518730 | orchestrator | 2026-03-24 04:16:03.518739 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-24 04:16:03.518749 | orchestrator | Tuesday 24 March 2026 04:15:50 +0000 (0:00:01.688) 0:00:39.816 ********* 2026-03-24 04:16:03.518759 | orchestrator | [WARNING]: Skipped 2026-03-24 04:16:03.518769 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-24 04:16:03.518781 | orchestrator | to this access issue: 2026-03-24 04:16:03.518792 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-24 04:16:03.518803 | orchestrator | 
directory 2026-03-24 04:16:03.518814 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 04:16:03.518825 | orchestrator | 2026-03-24 04:16:03.518836 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-24 04:16:03.518847 | orchestrator | Tuesday 24 March 2026 04:15:51 +0000 (0:00:01.668) 0:00:41.484 ********* 2026-03-24 04:16:03.518858 | orchestrator | [WARNING]: Skipped 2026-03-24 04:16:03.518869 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-24 04:16:03.518880 | orchestrator | to this access issue: 2026-03-24 04:16:03.518891 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-24 04:16:03.518904 | orchestrator | directory 2026-03-24 04:16:03.518915 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-24 04:16:03.518926 | orchestrator | 2026-03-24 04:16:03.518937 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-24 04:16:03.518949 | orchestrator | Tuesday 24 March 2026 04:15:53 +0000 (0:00:01.459) 0:00:42.943 ********* 2026-03-24 04:16:03.518960 | orchestrator | changed: [testbed-manager] 2026-03-24 04:16:03.518971 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:16:03.518982 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:16:03.518993 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:16:03.519004 | orchestrator | changed: [testbed-node-3] 2026-03-24 04:16:03.519015 | orchestrator | changed: [testbed-node-4] 2026-03-24 04:16:03.519026 | orchestrator | changed: [testbed-node-5] 2026-03-24 04:16:03.519037 | orchestrator | 2026-03-24 04:16:03.519049 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-24 04:16:03.519060 | orchestrator | Tuesday 24 March 2026 04:15:56 +0000 (0:00:03.287) 0:00:46.230 ********* 2026-03-24 04:16:03.519072 | orchestrator | ok: 
[testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 04:16:03.519084 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 04:16:03.519096 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 04:16:03.519107 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 04:16:03.519118 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 04:16:03.519139 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 04:16:03.519150 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-24 04:16:03.519159 | orchestrator | 2026-03-24 04:16:03.519169 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-24 04:16:03.519178 | orchestrator | Tuesday 24 March 2026 04:15:59 +0000 (0:00:02.792) 0:00:49.023 ********* 2026-03-24 04:16:03.519188 | orchestrator | ok: [testbed-manager] 2026-03-24 04:16:03.519197 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:16:03.519207 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:16:03.519216 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:16:03.519226 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:16:03.519235 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:16:03.519245 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:16:03.519254 | orchestrator | 2026-03-24 04:16:03.519264 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-24 04:16:03.519273 | orchestrator | Tuesday 24 March 2026 04:16:02 +0000 (0:00:02.673) 0:00:51.697 ********* 2026-03-24 04:16:03.519306 | orchestrator | ok: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:03.519326 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:03.519398 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:03.519413 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:03.519424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:03.519442 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:03.519452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:03.519470 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:11.305580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:11.305677 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:11.305687 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:11.305694 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:11.305719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:11.305727 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:11.305735 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:11.305758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:11.305765 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-24 04:16:11.305772 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:11.305778 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:11.305784 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:11.305795 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:11.305802 | orchestrator | 2026-03-24 04:16:11.305810 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] 
************************ 2026-03-24 04:16:11.305818 | orchestrator | Tuesday 24 March 2026 04:16:04 +0000 (0:00:02.720) 0:00:54.418 ********* 2026-03-24 04:16:11.305824 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 04:16:11.305831 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 04:16:11.305837 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 04:16:11.305843 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 04:16:11.305848 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 04:16:11.305854 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 04:16:11.305860 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-24 04:16:11.305866 | orchestrator | 2026-03-24 04:16:11.305872 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-24 04:16:11.305877 | orchestrator | Tuesday 24 March 2026 04:16:07 +0000 (0:00:02.964) 0:00:57.383 ********* 2026-03-24 04:16:11.305883 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 04:16:11.305889 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 04:16:11.305895 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 04:16:11.305901 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 04:16:11.305907 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 04:16:11.305916 | orchestrator | ok: [testbed-node-4] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 04:16:13.917292 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-24 04:16:13.917392 | orchestrator | 2026-03-24 04:16:13.917406 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-24 04:16:13.917415 | orchestrator | Tuesday 24 March 2026 04:16:11 +0000 (0:00:03.467) 0:01:00.850 ********* 2026-03-24 04:16:13.917426 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:13.917437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:13.917480 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:13.917490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:13.917498 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:13.917506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:13.917513 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:13.917537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:13.917547 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:13.917556 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:13.917570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:13.917578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:13.917585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:13.917598 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:18.444193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:18.444294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-24 04:16:18.444334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:18.444345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:18.444373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:18.444383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:18.444392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:18.444401 | orchestrator | 2026-03-24 04:16:18.444411 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-24 04:16:18.444422 | orchestrator | Tuesday 24 March 2026 04:16:15 +0000 (0:00:04.595) 0:01:05.446 ********* 2026-03-24 04:16:18.444433 | orchestrator | changed: [testbed-manager] => { 2026-03-24 04:16:18.444446 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:16:18.444456 | orchestrator | } 2026-03-24 04:16:18.444464 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:16:18.444473 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:16:18.444481 | orchestrator | } 2026-03-24 04:16:18.444490 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:16:18.444499 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:16:18.444508 | orchestrator | } 2026-03-24 04:16:18.444517 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:16:18.444527 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:16:18.444535 | orchestrator | } 2026-03-24 04:16:18.444544 | orchestrator | changed: [testbed-node-3] => { 2026-03-24 04:16:18.444552 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:16:18.444561 | orchestrator | } 2026-03-24 04:16:18.444571 | orchestrator | changed: [testbed-node-4] => { 2026-03-24 04:16:18.444579 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:16:18.444588 | orchestrator | } 2026-03-24 04:16:18.444597 | orchestrator | changed: [testbed-node-5] => { 2026-03-24 04:16:18.444605 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:16:18.444621 | 
orchestrator | } 2026-03-24 04:16:18.444631 | orchestrator | 2026-03-24 04:16:18.444655 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:16:18.444664 | orchestrator | Tuesday 24 March 2026 04:16:17 +0000 (0:00:02.040) 0:01:07.487 ********* 2026-03-24 04:16:18.444679 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:18.444691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:18.444701 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-24 04:16:18.444711 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:16:18.444750 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:16:24.582232 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:16:24.582284 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:16:24.582499 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:16:24.582509 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:16:24.582558 | orchestrator | skipping: [testbed-node-5] (per-item skip entries for testbed-node-0 through testbed-node-5 omitted; identical to the testbed-manager items above) 2026-03-24 04:16:24.582569 | orchestrator | 2026-03-24 04:16:24.582580 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:16:24.582593 | orchestrator | Tuesday 24 March 2026 04:16:20 +0000 (0:00:02.915) 0:01:10.402 ********* 2026-03-24 04:16:24.582604 | orchestrator | 2026-03-24 04:16:24.582615 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:16:24.582626 | orchestrator | Tuesday 24 March 2026 04:16:21 +0000 (0:00:00.424) 0:01:10.827 ********* 2026-03-24 04:16:24.582636 | orchestrator | 2026-03-24 04:16:24.582646 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:16:24.582657 | orchestrator | Tuesday 24 March 2026 04:16:21 +0000 (0:00:00.437) 0:01:11.264 ********* 2026-03-24 04:16:24.582667 | orchestrator | 2026-03-24 04:16:24.582678 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:16:24.582689 | orchestrator | Tuesday 24 March 2026 04:16:22 +0000 (0:00:00.424) 0:01:11.689 *********
2026-03-24 04:16:24.582700 | orchestrator | 2026-03-24 04:16:24.582711 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:16:24.582722 | orchestrator | Tuesday 24 March 2026 04:16:22 +0000 (0:00:00.419) 0:01:12.108 ********* 2026-03-24 04:16:24.582733 | orchestrator | 2026-03-24 04:16:24.582744 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:16:24.582755 | orchestrator | Tuesday 24 March 2026 04:16:23 +0000 (0:00:00.769) 0:01:12.878 ********* 2026-03-24 04:16:24.582766 | orchestrator | 2026-03-24 04:16:24.582782 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:16:24.582793 | orchestrator | Tuesday 24 March 2026 04:16:23 +0000 (0:00:00.421) 0:01:13.300 ********* 2026-03-24 04:16:24.582804 | orchestrator | 2026-03-24 04:16:24.582822 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-24 04:16:27.350590 | orchestrator | Tuesday 24 March 2026 04:16:24 +0000 (0:00:00.807) 0:01:14.107 ********* 2026-03-24 04:16:27.350706 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_nbq8sp65/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_nbq8sp65/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_nbq8sp65/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-24 04:16:27.350789 | orchestrator | fatal: [testbed-node-1]: FAILED! => (traceback identical to testbed-manager: docker.errors.APIError: 500 Server Error, "unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found") 2026-03-24 04:16:27.350807 | orchestrator | fatal: [testbed-node-0]: FAILED! => (identical traceback) 2026-03-24 04:16:27.350842 | orchestrator | fatal: [testbed-node-3]: FAILED! => (identical traceback) 2026-03-24 04:16:30.490448 | orchestrator | fatal: [testbed-node-4]: FAILED!
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_xtzeyw5l/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_xtzeyw5l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_xtzeyw5l/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-24 04:16:30.490616 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_n7r9eh7o/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_n7r9eh7o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_n7r9eh7o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-24 04:16:30.490635 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_goul_mjt/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_goul_mjt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_goul_mjt/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-03-24 04:16:30.490655 | orchestrator | 2026-03-24 04:16:30.490667 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:16:30.490679 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-24 04:16:30.490691 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-24 04:16:30.490706 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-24 04:16:30.490723 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-24 04:16:30.490740 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-24 04:16:30.490756 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-24 04:16:30.490772 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-03-24 04:16:30.490789 | orchestrator | 2026-03-24 04:16:30.490804 | orchestrator | 2026-03-24 04:16:30.490829 | orchestrator | TASKS RECAP ******************************************************************** 
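The root-cause line in the traceback is the registry's "artifact kolla/release/fluentd:5.0.8.20251208 not found" response, so the pull fails before any container work starts. A quick pre-flight check is to ask the registry directly whether the tag exists. The sketch below is a diagnostic helper, not part of this job: it assumes the registry speaks the standard Docker Registry HTTP API v2 and answers anonymous HEAD requests on manifests (a real deployment may require a bearer token first).

```python
# Diagnostic sketch: verify an image tag exists in a v2 registry before an
# upgrade run. The repository and tag below are taken from the failing pull
# in the log above; authentication is deliberately omitted (assumption:
# anonymous access), so a 401 is reported the same way as a missing tag.
import urllib.error
import urllib.request


def manifest_url(image_ref: str) -> str:
    """Turn 'host/repo/name:tag' into a Registry HTTP API v2 manifest URL."""
    name, _, tag = image_ref.rpartition(":")   # split off the tag
    host, _, repo = name.partition("/")        # first component is the registry host
    return f"https://{host}/v2/{repo}/manifests/{tag}"


def tag_exists(image_ref: str) -> bool:
    """HEAD the manifest; 200 means the tag is present."""
    req = urllib.request.Request(
        manifest_url(image_ref),
        method="HEAD",
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 (tag missing) or 401 (auth required)


# The failed pull requested this reference:
print(manifest_url("registry.osism.tech/kolla/release/fluentd:5.0.8.20251208"))
# → https://registry.osism.tech/v2/kolla/release/fluentd/manifests/5.0.8.20251208
```

Worth noting: later tasks in this log configure images under `kolla/release/2025.1/...` (e.g. `registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208`), while the failed pull requested `kolla/release/fluentd:...` without the release segment. Checking both paths with a helper like this would distinguish a genuinely missing artifact from a mis-resolved image reference.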
2026-03-24 04:16:30.807869 | orchestrator | 2026-03-24 04:16:30 | INFO  | Task 6a05d470-5fd1-4fe2-9f87-0aa595515705 (common) was prepared for execution.
2026-03-24 04:16:30.807981 | orchestrator | 2026-03-24 04:16:30 | INFO  | It takes a moment until task 6a05d470-5fd1-4fe2-9f87-0aa595515705 (common) has been started and output is visible here.
2026-03-24 04:16:43.863108 | orchestrator | Tuesday 24 March 2026 04:16:30 +0000 (0:00:05.929) 0:01:20.037 *********
2026-03-24 04:16:43.863219 | orchestrator | ===============================================================================
2026-03-24 04:16:43.863234 | orchestrator | common : Restart fluentd container -------------------------------------- 5.93s
2026-03-24 04:16:43.863244 | orchestrator | common : Copying over config.json files for services -------------------- 4.87s
2026-03-24 04:16:43.863255 | orchestrator | service-check-containers : common | Check containers -------------------- 4.60s
2026-03-24 04:16:43.863264 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.45s
2026-03-24 04:16:43.863274 | orchestrator | common : Flush handlers ------------------------------------------------- 3.70s
2026-03-24 04:16:43.863284 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.48s
2026-03-24 04:16:43.863294 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.47s
2026-03-24 04:16:43.863303 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.29s
2026-03-24 04:16:43.863313 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.24s
2026-03-24 04:16:43.863323 | orchestrator | common : include_tasks -------------------------------------------------- 3.19s
2026-03-24 04:16:43.863332 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.96s
2026-03-24 04:16:43.863342 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.92s
2026-03-24 04:16:43.863351 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.91s
2026-03-24 04:16:43.863453 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.79s
2026-03-24 04:16:43.863468 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.72s
2026-03-24 04:16:43.863479 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.68s
2026-03-24 04:16:43.863489 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.67s
2026-03-24 04:16:43.863499 | orchestrator | common : include_tasks -------------------------------------------------- 2.60s
2026-03-24 04:16:43.863509 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.41s
2026-03-24 04:16:43.863519 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.19s
2026-03-24 04:16:43.863529 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin (): Expecting value: line 2 column 1 (char 1)
2026-03-24 04:16:43.863577 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin (): 'NoneType' object is not subscriptable
2026-03-24 04:16:43.863649 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-24 04:16:43.863678 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-24 04:16:43.863687 | orchestrator | Tuesday 24 March 2026 04:16:35 +0000 (0:00:01.498) 0:00:01.498 *********
2026-03-24 04:16:43.863705 | orchestrator |
included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

2026-03-24 04:16:43.863726 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-24 04:16:43.863736 | orchestrator | Tuesday 24 March 2026 04:16:37 +0000 (0:00:01.773) 0:00:03.272 *********
ok: on all seven hosts for each of the items 'cron', 'fluentd', 'kolla-toolbox' (21 interleaved per-host entries of the form ok: [host] => (item=[{'service_name': 'cron'}, 'cron']) condensed here; all identical)

2026-03-24 04:16:43.864004 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-24 04:16:43.864013 | orchestrator | Tuesday 24 March 2026 04:16:39 +0000 (0:00:02.097) 0:00:05.370 *********
included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

2026-03-24 04:16:43.864044 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-24 04:16:43.864054 | orchestrator | Tuesday 24 March 2026 04:16:41 +0000 (0:00:01.879) 0:00:07.249 *********
ok: on all seven hosts for the items 'fluentd', 'kolla-toolbox', 'cron'; the per-host item dicts are identical. Representative values:
  fluentd:       {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}
  kolla-toolbox: {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}
  cron:          {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}

2026-03-24 04:16:45.604491 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-24 04:16:45.604497 | orchestrator | Tuesday 24 March 2026 04:16:44 +0000 (0:00:03.263) 0:00:10.513
*********
skipping: on testbed-manager and testbed-node-0 through testbed-node-2 for the items 'fluentd', 'kolla-toolbox', 'cron' (per-item dicts identical to those logged for the CA-certificates task above; entries for the remaining hosts fall outside this excerpt)
2026-03-24 04:16:46.434535 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:16:46.434550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group':
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:46.434560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:46.434568 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:16:46.434576 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:16:46.434585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:46.434594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:46.434602 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:16:46.434610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:46.434626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:47.375548 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:16:47.375809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:47.375909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:47.375935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:47.375957 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:16:47.375982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:47.376002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:47.376021 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:16:47.376040 | orchestrator | 2026-03-24 04:16:47.376064 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-24 04:16:47.376088 | orchestrator | Tuesday 24 March 2026 04:16:46 +0000 (0:00:01.591) 0:00:12.105 ********* 2026-03-24 04:16:47.376130 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:47.376167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:47.376232 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:47.376254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:47.376285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 
04:16:47.376308 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:47.376327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:47.376347 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:16:47.376366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:47.376427 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:16:47.376447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:47.376506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:53.888611 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:16:53.888727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:53.888744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:53.888755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:53.888766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:53.888831 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:16:53.888844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:53.888854 | orchestrator | skipping: [testbed-node-2] 2026-03-24 
04:16:53.888863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:53.888895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:16:53.888921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:53.888931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:53.888941 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:16:53.888963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:53.888973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:16:53.888982 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:16:53.888991 | orchestrator | 2026-03-24 04:16:53.889001 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-24 04:16:53.889011 | orchestrator | Tuesday 24 March 2026 04:16:48 +0000 (0:00:02.056) 0:00:14.161 ********* 2026-03-24 04:16:53.889020 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:16:53.889029 | orchestrator | 
skipping: [testbed-node-0] 2026-03-24 04:16:53.889038 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:16:53.889046 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:16:53.889055 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:16:53.889064 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:16:53.889072 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:16:53.889088 | orchestrator | 2026-03-24 04:16:53.889097 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-24 04:16:53.889106 | orchestrator | Tuesday 24 March 2026 04:16:49 +0000 (0:00:01.065) 0:00:15.227 ********* 2026-03-24 04:16:53.889114 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:16:53.889123 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:16:53.889132 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:16:53.889141 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:16:53.889151 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:16:53.889160 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:16:53.889170 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:16:53.889180 | orchestrator | 2026-03-24 04:16:53.889190 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-24 04:16:53.889200 | orchestrator | Tuesday 24 March 2026 04:16:50 +0000 (0:00:00.924) 0:00:16.151 ********* 2026-03-24 04:16:53.889210 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:16:53.889220 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:16:53.889230 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:16:53.889240 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:16:53.889250 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:16:53.889260 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:16:53.889270 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:16:53.889280 | orchestrator | 
2026-03-24 04:16:53.889290 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-24 04:16:53.889300 | orchestrator | Tuesday 24 March 2026 04:16:51 +0000 (0:00:00.736) 0:00:16.888 ********* 2026-03-24 04:16:53.889310 | orchestrator | ok: [testbed-manager] 2026-03-24 04:16:53.889321 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:16:53.889330 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:16:53.889340 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:16:53.889349 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:16:53.889359 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:16:53.889369 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:16:53.889380 | orchestrator | 2026-03-24 04:16:53.889429 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-24 04:16:53.889440 | orchestrator | Tuesday 24 March 2026 04:16:52 +0000 (0:00:01.777) 0:00:18.665 ********* 2026-03-24 04:16:53.889460 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:55.674505 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:55.674605 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:55.674666 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:55.674682 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:55.674694 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:55.674707 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:16:55.674720 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-24 04:16:55.674750 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:16:55.674763 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:16:55.674789 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:16:55.674804 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:16:55.674817 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:16:55.674829 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:16:55.674841 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:16:55.674861 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:08.277887 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:08.278105 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:08.278127 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:08.278140 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:08.278151 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:08.278163 | orchestrator |
2026-03-24 04:17:08.278176 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-03-24 04:17:08.278189 | orchestrator | Tuesday 24 March 2026 04:16:56 +0000 (0:00:03.650) 0:00:22.316 *********
2026-03-24 04:17:08.278200 | orchestrator | [WARNING]: Skipped
2026-03-24 04:17:08.278213 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-03-24 04:17:08.278224 | orchestrator | to this access issue:
2026-03-24 04:17:08.278236 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-03-24 04:17:08.278247 | orchestrator | directory
2026-03-24 04:17:08.278258 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-24 04:17:08.278271 | orchestrator |
2026-03-24 04:17:08.278282 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-03-24 04:17:08.278293 | orchestrator | Tuesday 24 March 2026 04:16:57 +0000 (0:00:01.291) 0:00:23.607 *********
2026-03-24 04:17:08.278303 | orchestrator | [WARNING]: Skipped
2026-03-24 04:17:08.278314 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-03-24 04:17:08.278325 | orchestrator | to this access issue:
2026-03-24 04:17:08.278336 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-03-24 04:17:08.278346 | orchestrator | directory
2026-03-24 04:17:08.278358 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-24 04:17:08.278372 | orchestrator |
2026-03-24 04:17:08.278385 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-03-24 04:17:08.278442 | orchestrator | Tuesday 24 March 2026 04:16:58 +0000 (0:00:00.934) 0:00:24.542 *********
2026-03-24 04:17:08.278456 | orchestrator | [WARNING]: Skipped
2026-03-24 04:17:08.278469 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-03-24 04:17:08.278481 | orchestrator | to this access issue:
2026-03-24 04:17:08.278493 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-03-24 04:17:08.278517 | orchestrator | directory
2026-03-24 04:17:08.278530 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-24 04:17:08.278552 | orchestrator |
2026-03-24 04:17:08.278574 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-03-24 04:17:08.278587 | orchestrator | Tuesday 24 March 2026 04:16:59 +0000 (0:00:00.882) 0:00:25.424 *********
2026-03-24 04:17:08.278600 | orchestrator | [WARNING]: Skipped
2026-03-24 04:17:08.278612 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-03-24 04:17:08.278624 | orchestrator | to this access issue:
2026-03-24 04:17:08.278635 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-03-24 04:17:08.278646 | orchestrator | directory
2026-03-24 04:17:08.278657 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-24 04:17:08.278685 | orchestrator |
2026-03-24 04:17:08.278717 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-03-24 04:17:08.278730 | orchestrator | Tuesday 24 March 2026 04:17:00 +0000 (0:00:00.867) 0:00:26.292 *********
2026-03-24 04:17:08.278740 | orchestrator | ok: [testbed-manager]
2026-03-24 04:17:08.278751 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:17:08.278762 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:17:08.278773 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:17:08.278784 | orchestrator | ok: [testbed-node-3]
2026-03-24 04:17:08.278794 | orchestrator | ok: [testbed-node-4]
2026-03-24 04:17:08.278805 | orchestrator | ok: [testbed-node-5]
2026-03-24 04:17:08.278816 | orchestrator |
2026-03-24 04:17:08.278827 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-03-24 04:17:08.278838 | orchestrator | Tuesday 24 March 2026 04:17:03 +0000 (0:00:02.780) 0:00:29.073 *********
2026-03-24 04:17:08.278849 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-24 04:17:08.278861 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-24 04:17:08.278872 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-24 04:17:08.278888 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-24 04:17:08.278900 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-24 04:17:08.278910 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-24 04:17:08.278921 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-03-24 04:17:08.278932 | orchestrator |
2026-03-24 04:17:08.278943 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-03-24 04:17:08.278961 | orchestrator | Tuesday 24 March 2026 04:17:05 +0000 (0:00:02.133) 0:00:31.206 *********
2026-03-24 04:17:08.278981 | orchestrator | ok: [testbed-manager]
2026-03-24 04:17:08.278999 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:17:08.279018 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:17:08.279037 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:17:08.279069 | orchestrator | ok: [testbed-node-3]
2026-03-24 04:17:08.279089 | orchestrator | ok: [testbed-node-4]
2026-03-24 04:17:08.279108 | orchestrator | ok: [testbed-node-5]
2026-03-24 04:17:08.279127 | orchestrator |
2026-03-24 04:17:08.279145 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-03-24 04:17:08.279163 | orchestrator | Tuesday 24 March 2026 04:17:07 +0000 (0:00:01.798) 0:00:33.005 *********
2026-03-24 04:17:08.279184 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:08.279220 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:08.279241 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:08.279262 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:08.279298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.108934 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:09.109040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109056 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:09.109090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109103 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:09.109114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109126 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109165 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:09.109179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109190 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109210 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109222 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:09.109234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109246 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109258 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:09.109276 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:16.756202 | orchestrator |
2026-03-24 04:17:16.756350 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-03-24 04:17:16.756371 | orchestrator | Tuesday 24 March 2026 04:17:09 +0000 (0:00:01.901) 0:00:34.907 *********
2026-03-24 04:17:16.756384 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-24 04:17:16.756396 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-24 04:17:16.756434 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-24 04:17:16.756445 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-24 04:17:16.756456 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-24 04:17:16.756467 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-24 04:17:16.756498 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-24 04:17:16.756510 | orchestrator |
2026-03-24 04:17:16.756526 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-24 04:17:16.756551 | orchestrator | Tuesday 24 March 2026 04:17:11 +0000 (0:00:02.035) 0:00:36.942 *********
2026-03-24 04:17:16.756578 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-24 04:17:16.756597 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-24 04:17:16.756615 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-24 04:17:16.756633 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-24 04:17:16.756649 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-24 04:17:16.756668 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-24 04:17:16.756686 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-24 04:17:16.756707 | orchestrator |
2026-03-24 04:17:16.756724 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-03-24 04:17:16.756743 | orchestrator | Tuesday 24 March 2026 04:17:13 +0000 (0:00:02.458) 0:00:39.401 *********
2026-03-24 04:17:16.756767 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:16.756791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:16.756812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:16.756830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:16.756889 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:16.756923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:16.756943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:16.756963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-24 04:17:16.756984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:16.757006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:16.757027 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:16.757068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:18.596442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:18.596558 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:18.596578 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:18.596593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:18.596608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-24 04:17:18.596621 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:17:18.596635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:17:18.596705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:17:18.596721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:17:18.596734 | orchestrator | 2026-03-24 04:17:18.596748 | orchestrator | TASK [service-check-containers : common 
| Notify handlers to restart containers] *** 2026-03-24 04:17:18.596761 | orchestrator | Tuesday 24 March 2026 04:17:17 +0000 (0:00:03.543) 0:00:42.944 ********* 2026-03-24 04:17:18.596774 | orchestrator | changed: [testbed-manager] => { 2026-03-24 04:17:18.596787 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:17:18.596799 | orchestrator | } 2026-03-24 04:17:18.596811 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:17:18.596822 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:17:18.596833 | orchestrator | } 2026-03-24 04:17:18.596846 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:17:18.596858 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:17:18.596870 | orchestrator | } 2026-03-24 04:17:18.596882 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:17:18.596894 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:17:18.596905 | orchestrator | } 2026-03-24 04:17:18.596917 | orchestrator | changed: [testbed-node-3] => { 2026-03-24 04:17:18.596929 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:17:18.596941 | orchestrator | } 2026-03-24 04:17:18.596952 | orchestrator | changed: [testbed-node-4] => { 2026-03-24 04:17:18.596963 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:17:18.596976 | orchestrator | } 2026-03-24 04:17:18.596990 | orchestrator | changed: [testbed-node-5] => { 2026-03-24 04:17:18.597002 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:17:18.597014 | orchestrator | } 2026-03-24 04:17:18.597024 | orchestrator | 2026-03-24 04:17:18.597032 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:17:18.597039 | orchestrator | Tuesday 24 March 2026 04:17:18 +0000 (0:00:00.990) 0:00:43.935 ********* 2026-03-24 04:17:18.597047 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:17:18.597055 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:18.597072 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:18.597079 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:17:18.597089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:17:18.597112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.102798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.102891 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:17:21.102906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  
2026-03-24 04:17:21.102917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.102944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.102973 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:17:21.102982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:17:21.102991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.103003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.103012 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:17:21.103037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:17:21.103046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.103054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.103063 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-24 04:17:21.103072 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-24 04:17:21.103089 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:17:21.103097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:17:21.103112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.103121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:17:21.103129 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:17:21.103141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-24 04:17:21.103156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:18:49.575330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:18:49.575445 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:18:49.575463 | orchestrator | 2026-03-24 04:18:49.575556 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:18:49.575581 | orchestrator | Tuesday 24 March 2026 04:17:20 +0000 (0:00:02.039) 0:00:45.975 ********* 2026-03-24 04:18:49.575592 | orchestrator | 2026-03-24 04:18:49.575604 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:18:49.575615 | orchestrator | Tuesday 24 March 2026 04:17:20 +0000 (0:00:00.079) 0:00:46.054 ********* 2026-03-24 04:18:49.575625 | orchestrator | 2026-03-24 04:18:49.575636 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:18:49.575672 | orchestrator | Tuesday 24 March 2026 04:17:20 +0000 (0:00:00.074) 0:00:46.129 ********* 2026-03-24 04:18:49.575683 | orchestrator | 2026-03-24 04:18:49.575694 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:18:49.575705 | orchestrator | Tuesday 24 March 2026 04:17:20 +0000 (0:00:00.077) 0:00:46.206 ********* 2026-03-24 04:18:49.575715 | orchestrator | 2026-03-24 04:18:49.575726 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 
04:18:49.575737 | orchestrator | Tuesday 24 March 2026 04:17:20 +0000 (0:00:00.075) 0:00:46.282 ********* 2026-03-24 04:18:49.575747 | orchestrator | 2026-03-24 04:18:49.575758 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:18:49.575769 | orchestrator | Tuesday 24 March 2026 04:17:20 +0000 (0:00:00.306) 0:00:46.588 ********* 2026-03-24 04:18:49.575779 | orchestrator | 2026-03-24 04:18:49.575790 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-24 04:18:49.575801 | orchestrator | Tuesday 24 March 2026 04:17:20 +0000 (0:00:00.072) 0:00:46.660 ********* 2026-03-24 04:18:49.575811 | orchestrator | 2026-03-24 04:18:49.575822 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-24 04:18:49.575833 | orchestrator | Tuesday 24 March 2026 04:17:21 +0000 (0:00:00.106) 0:00:46.767 ********* 2026-03-24 04:18:49.575844 | orchestrator | changed: [testbed-manager] 2026-03-24 04:18:49.575857 | orchestrator | changed: [testbed-node-5] 2026-03-24 04:18:49.575869 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:18:49.575882 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:18:49.575893 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:18:49.575905 | orchestrator | changed: [testbed-node-3] 2026-03-24 04:18:49.575917 | orchestrator | changed: [testbed-node-4] 2026-03-24 04:18:49.575929 | orchestrator | 2026-03-24 04:18:49.575941 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-24 04:18:49.575954 | orchestrator | Tuesday 24 March 2026 04:17:57 +0000 (0:00:36.098) 0:01:22.865 ********* 2026-03-24 04:18:49.575966 | orchestrator | changed: [testbed-manager] 2026-03-24 04:18:49.575978 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:18:49.575991 | orchestrator | changed: [testbed-node-5] 2026-03-24 04:18:49.576003 | orchestrator | 
changed: [testbed-node-0] 2026-03-24 04:18:49.576015 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:18:49.576028 | orchestrator | changed: [testbed-node-3] 2026-03-24 04:18:49.576040 | orchestrator | changed: [testbed-node-4] 2026-03-24 04:18:49.576051 | orchestrator | 2026-03-24 04:18:49.576063 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-24 04:18:49.576076 | orchestrator | Tuesday 24 March 2026 04:18:35 +0000 (0:00:37.842) 0:02:00.708 ********* 2026-03-24 04:18:49.576088 | orchestrator | ok: [testbed-manager] 2026-03-24 04:18:49.576101 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:18:49.576113 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:18:49.576125 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:18:49.576138 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:18:49.576150 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:18:49.576162 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:18:49.576175 | orchestrator | 2026-03-24 04:18:49.576187 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-24 04:18:49.576198 | orchestrator | Tuesday 24 March 2026 04:18:37 +0000 (0:00:02.138) 0:02:02.846 ********* 2026-03-24 04:18:49.576209 | orchestrator | changed: [testbed-manager] 2026-03-24 04:18:49.576219 | orchestrator | changed: [testbed-node-3] 2026-03-24 04:18:49.576230 | orchestrator | changed: [testbed-node-5] 2026-03-24 04:18:49.576241 | orchestrator | changed: [testbed-node-4] 2026-03-24 04:18:49.576266 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:18:49.576277 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:18:49.576287 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:18:49.576298 | orchestrator | 2026-03-24 04:18:49.576309 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:18:49.576321 | orchestrator | testbed-manager : ok=22  
changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 04:18:49.576341 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 04:18:49.576353 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 04:18:49.576363 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 04:18:49.576392 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 04:18:49.576404 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 04:18:49.576415 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 04:18:49.576426 | orchestrator | 2026-03-24 04:18:49.576437 | orchestrator | 2026-03-24 04:18:49.576448 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:18:49.576459 | orchestrator | Tuesday 24 March 2026 04:18:48 +0000 (0:00:11.798) 0:02:14.645 ********* 2026-03-24 04:18:49.576490 | orchestrator | =============================================================================== 2026-03-24 04:18:49.576501 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 37.84s 2026-03-24 04:18:49.576512 | orchestrator | common : Restart fluentd container ------------------------------------- 36.10s 2026-03-24 04:18:49.576523 | orchestrator | common : Restart cron container ---------------------------------------- 11.80s 2026-03-24 04:18:49.576534 | orchestrator | common : Copying over config.json files for services -------------------- 3.65s 2026-03-24 04:18:49.576545 | orchestrator | service-check-containers : common | Check containers -------------------- 3.54s 2026-03-24 04:18:49.576556 | orchestrator | service-cert-copy 
: common | Copying over extra CA certificates --------- 3.26s 2026-03-24 04:18:49.576566 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.78s 2026-03-24 04:18:49.576577 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.46s 2026-03-24 04:18:49.576588 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.14s 2026-03-24 04:18:49.576599 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.13s 2026-03-24 04:18:49.576609 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.10s 2026-03-24 04:18:49.576620 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.06s 2026-03-24 04:18:49.576631 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.04s 2026-03-24 04:18:49.576642 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.04s 2026-03-24 04:18:49.576653 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.90s 2026-03-24 04:18:49.576664 | orchestrator | common : include_tasks -------------------------------------------------- 1.88s 2026-03-24 04:18:49.576674 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.80s 2026-03-24 04:18:49.576685 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.78s 2026-03-24 04:18:49.576696 | orchestrator | common : include_tasks -------------------------------------------------- 1.77s 2026-03-24 04:18:49.576707 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.59s 2026-03-24 04:18:49.911895 | orchestrator | + osism apply -a upgrade loadbalancer 2026-03-24 04:18:52.044817 | orchestrator | 2026-03-24 04:18:52 | INFO  | Task 
914b96df-eb2c-4f15-a74f-02e33977941f (loadbalancer) was prepared for execution. 2026-03-24 04:18:52.044988 | orchestrator | 2026-03-24 04:18:52 | INFO  | It takes a moment until task 914b96df-eb2c-4f15-a74f-02e33977941f (loadbalancer) has been started and output is visible here. 2026-03-24 04:19:26.186108 | orchestrator | 2026-03-24 04:19:26.186278 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 04:19:26.186310 | orchestrator | 2026-03-24 04:19:26.186328 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 04:19:26.186347 | orchestrator | Tuesday 24 March 2026 04:18:58 +0000 (0:00:01.748) 0:00:01.748 ********* 2026-03-24 04:19:26.186365 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:19:26.186385 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:19:26.186404 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:19:26.186424 | orchestrator | 2026-03-24 04:19:26.186445 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 04:19:26.186464 | orchestrator | Tuesday 24 March 2026 04:19:00 +0000 (0:00:01.722) 0:00:03.471 ********* 2026-03-24 04:19:26.186485 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-24 04:19:26.186567 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-24 04:19:26.186582 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-24 04:19:26.186595 | orchestrator | 2026-03-24 04:19:26.186607 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-24 04:19:26.186620 | orchestrator | 2026-03-24 04:19:26.186636 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-24 04:19:26.186655 | orchestrator | Tuesday 24 March 2026 04:19:03 +0000 (0:00:02.968) 0:00:06.440 ********* 2026-03-24 04:19:26.186676 | 
orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:19:26.186695 | orchestrator | 2026-03-24 04:19:26.186712 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] *** 2026-03-24 04:19:26.186733 | orchestrator | Tuesday 24 March 2026 04:19:04 +0000 (0:00:01.709) 0:00:08.150 ********* 2026-03-24 04:19:26.186751 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:19:26.186769 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:19:26.186787 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:19:26.186806 | orchestrator | 2026-03-24 04:19:26.186825 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-03-24 04:19:26.186843 | orchestrator | Tuesday 24 March 2026 04:19:06 +0000 (0:00:01.949) 0:00:10.100 ********* 2026-03-24 04:19:26.186930 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:19:26.186949 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:19:26.186967 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:19:26.186984 | orchestrator | 2026-03-24 04:19:26.187001 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-24 04:19:26.187019 | orchestrator | Tuesday 24 March 2026 04:19:08 +0000 (0:00:01.993) 0:00:12.094 ********* 2026-03-24 04:19:26.187037 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:19:26.187055 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:19:26.187072 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:19:26.187090 | orchestrator | 2026-03-24 04:19:26.187109 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-24 04:19:26.187128 | orchestrator | Tuesday 24 March 2026 04:19:10 +0000 (0:00:01.656) 0:00:13.750 ********* 2026-03-24 04:19:26.187147 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:19:26.187166 | 
orchestrator | 2026-03-24 04:19:26.187185 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-24 04:19:26.187203 | orchestrator | Tuesday 24 March 2026 04:19:12 +0000 (0:00:01.976) 0:00:15.726 ********* 2026-03-24 04:19:26.187218 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:19:26.187230 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:19:26.187241 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:19:26.187251 | orchestrator | 2026-03-24 04:19:26.187262 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-24 04:19:26.187299 | orchestrator | Tuesday 24 March 2026 04:19:14 +0000 (0:00:01.630) 0:00:17.357 ********* 2026-03-24 04:19:26.187310 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-24 04:19:26.187321 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-24 04:19:26.187332 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-24 04:19:26.187343 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-24 04:19:26.187354 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-24 04:19:26.187371 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-24 04:19:26.187389 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-24 04:19:26.187410 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-24 04:19:26.187429 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-24 04:19:26.187448 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 
'value': 128}) 2026-03-24 04:19:26.187467 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-24 04:19:26.187485 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-24 04:19:26.187541 | orchestrator | 2026-03-24 04:19:26.187554 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-24 04:19:26.187565 | orchestrator | Tuesday 24 March 2026 04:19:17 +0000 (0:00:03.237) 0:00:20.594 ********* 2026-03-24 04:19:26.187576 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-24 04:19:26.187588 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-24 04:19:26.187599 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-24 04:19:26.187610 | orchestrator | 2026-03-24 04:19:26.187621 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-24 04:19:26.187664 | orchestrator | Tuesday 24 March 2026 04:19:19 +0000 (0:00:01.909) 0:00:22.503 ********* 2026-03-24 04:19:26.187684 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-24 04:19:26.187702 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-24 04:19:26.187718 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-24 04:19:26.187737 | orchestrator | 2026-03-24 04:19:26.187756 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-24 04:19:26.187775 | orchestrator | Tuesday 24 March 2026 04:19:21 +0000 (0:00:02.389) 0:00:24.893 ********* 2026-03-24 04:19:26.187793 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-24 04:19:26.187812 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:19:26.187830 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-24 04:19:26.187849 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:19:26.187867 | orchestrator | skipping: [testbed-node-2] 
=> (item=ip_vs)  2026-03-24 04:19:26.187898 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:19:26.187916 | orchestrator | 2026-03-24 04:19:26.187934 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-24 04:19:26.187952 | orchestrator | Tuesday 24 March 2026 04:19:23 +0000 (0:00:01.896) 0:00:26.789 ********* 2026-03-24 04:19:26.187975 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 04:19:26.188022 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 04:19:26.188044 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 04:19:26.188065 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:19:26.188086 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:19:26.188126 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:19:37.193903 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:19:37.194222 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:19:37.194288 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:19:37.194312 | orchestrator | 2026-03-24 04:19:37.194336 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-24 04:19:37.194357 | orchestrator | Tuesday 24 March 2026 04:19:26 +0000 (0:00:02.723) 0:00:29.512 ********* 2026-03-24 04:19:37.194375 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:19:37.194387 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:19:37.194398 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:19:37.194408 | orchestrator | 2026-03-24 04:19:37.194419 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-24 04:19:37.194430 | orchestrator | Tuesday 24 March 2026 04:19:28 +0000 (0:00:01.920) 0:00:31.433 ********* 2026-03-24 04:19:37.194441 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-03-24 04:19:37.194453 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-03-24 04:19:37.194464 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-03-24 04:19:37.194474 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-03-24 04:19:37.194485 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-03-24 04:19:37.194495 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-03-24 04:19:37.194562 | orchestrator | 2026-03-24 04:19:37.194573 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-24 04:19:37.194584 | orchestrator | Tuesday 24 March 2026 04:19:30 +0000 (0:00:02.911) 0:00:34.344 ********* 2026-03-24 04:19:37.194596 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:19:37.194606 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:19:37.194617 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:19:37.194627 | orchestrator | 2026-03-24 
04:19:37.194638 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-24 04:19:37.194649 | orchestrator | Tuesday 24 March 2026 04:19:33 +0000 (0:00:02.318) 0:00:36.663 ********* 2026-03-24 04:19:37.194660 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:19:37.194670 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:19:37.194681 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:19:37.194691 | orchestrator | 2026-03-24 04:19:37.194702 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-24 04:19:37.194713 | orchestrator | Tuesday 24 March 2026 04:19:35 +0000 (0:00:02.241) 0:00:38.904 ********* 2026-03-24 04:19:37.194725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 04:19:37.194770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:19:37.194793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:19:37.194807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 04:19:37.194818 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:19:37.194830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 04:19:37.194842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:19:37.194854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:19:37.194865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229', 
'__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 04:19:37.194882 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:19:37.194906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 04:19:41.348768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:19:41.348861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:19:41.348874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 04:19:41.348883 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:19:41.348893 | orchestrator | 2026-03-24 04:19:41.348901 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-24 04:19:41.348910 | orchestrator | Tuesday 24 March 2026 04:19:37 +0000 (0:00:01.618) 0:00:40.523 ********* 2026-03-24 04:19:41.348917 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 04:19:41.348946 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 04:19:41.348955 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 04:19:41.348976 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:19:41.348998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:19:41.349006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 04:19:41.349014 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:19:41.349029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:19:41.349041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 04:19:41.349054 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:19:54.989760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:19:54.989873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229', '__omit_place_holder__50cd7c071391fa8df860c8037c27e9306d402229'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-24 04:19:54.989887 | orchestrator | 2026-03-24 04:19:54.989900 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-24 04:19:54.989912 | orchestrator | Tuesday 24 March 2026 04:19:41 +0000 (0:00:04.158) 0:00:44.682 ********* 2026-03-24 04:19:54.989923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 04:19:54.989956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 04:19:54.989981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 04:19:54.989992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:19:54.990076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:19:54.990089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:19:54.990100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:19:54.990122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:19:54.990133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:19:54.990173 | orchestrator | 2026-03-24 04:19:54.990184 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-24 04:19:54.990194 | orchestrator | Tuesday 24 March 2026 04:19:46 +0000 (0:00:04.850) 0:00:49.532 ********* 2026-03-24 04:19:54.990204 | orchestrator | ok: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-24 04:19:54.990220 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-24 04:19:54.990230 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-24 04:19:54.990239 | orchestrator | 2026-03-24 04:19:54.990249 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-24 04:19:54.990259 | orchestrator | Tuesday 24 March 2026 04:19:48 +0000 (0:00:02.633) 0:00:52.165 ********* 2026-03-24 04:19:54.990269 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-24 04:19:54.990278 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-24 04:19:54.990288 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-24 04:19:54.990298 | orchestrator | 2026-03-24 04:19:54.990310 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-24 04:19:54.990321 | orchestrator | Tuesday 24 March 2026 04:19:53 +0000 (0:00:04.357) 0:00:56.523 ********* 2026-03-24 04:19:54.990332 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:19:54.990346 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:19:54.990364 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:20:16.436007 | orchestrator | 2026-03-24 04:20:16.436154 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-24 04:20:16.436173 | orchestrator | Tuesday 24 March 2026 04:19:54 +0000 (0:00:01.799) 0:00:58.323 ********* 2026-03-24 04:20:16.436186 | orchestrator | ok: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-24 04:20:16.436198 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-24 04:20:16.436210 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-24 04:20:16.436221 | orchestrator | 2026-03-24 04:20:16.436232 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-24 04:20:16.436244 | orchestrator | Tuesday 24 March 2026 04:19:58 +0000 (0:00:03.849) 0:01:02.173 ********* 2026-03-24 04:20:16.436255 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-24 04:20:16.436292 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-24 04:20:16.436307 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-24 04:20:16.436327 | orchestrator | 2026-03-24 04:20:16.436347 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-24 04:20:16.436365 | orchestrator | Tuesday 24 March 2026 04:20:01 +0000 (0:00:02.649) 0:01:04.822 ********* 2026-03-24 04:20:16.436386 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:20:16.436404 | orchestrator | 2026-03-24 04:20:16.436445 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-24 04:20:16.436478 | orchestrator | Tuesday 24 March 2026 04:20:03 +0000 (0:00:01.945) 0:01:06.768 ********* 2026-03-24 04:20:16.436497 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-03-24 04:20:16.436516 | orchestrator | ok: 
[testbed-node-1] => (item=haproxy.pem) 2026-03-24 04:20:16.436559 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-03-24 04:20:16.436581 | orchestrator | 2026-03-24 04:20:16.436600 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-24 04:20:16.436618 | orchestrator | Tuesday 24 March 2026 04:20:06 +0000 (0:00:02.684) 0:01:09.452 ********* 2026-03-24 04:20:16.436637 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-24 04:20:16.436657 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-24 04:20:16.436676 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-24 04:20:16.436694 | orchestrator | 2026-03-24 04:20:16.436714 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-03-24 04:20:16.436735 | orchestrator | Tuesday 24 March 2026 04:20:08 +0000 (0:00:02.591) 0:01:12.043 ********* 2026-03-24 04:20:16.436752 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:20:16.436773 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:20:16.436787 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:20:16.436798 | orchestrator | 2026-03-24 04:20:16.436809 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-03-24 04:20:16.436821 | orchestrator | Tuesday 24 March 2026 04:20:10 +0000 (0:00:01.386) 0:01:13.430 ********* 2026-03-24 04:20:16.436832 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:20:16.436843 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:20:16.436854 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:20:16.436864 | orchestrator | 2026-03-24 04:20:16.436875 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-24 04:20:16.436886 | orchestrator | Tuesday 24 March 2026 04:20:12 +0000 (0:00:02.006) 0:01:15.436 ********* 2026-03-24 
04:20:16.436917 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 04:20:16.436933 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 04:20:16.436977 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 04:20:16.436990 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:20:16.437001 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:20:16.437012 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:20:16.437024 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:20:16.437042 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:20:16.437062 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:20:20.147776 | orchestrator | 2026-03-24 04:20:20.147874 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-24 04:20:20.147891 | 
orchestrator | Tuesday 24 March 2026 04:20:16 +0000 (0:00:04.322) 0:01:19.759 ********* 2026-03-24 04:20:20.147907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 04:20:20.147922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:20:20.147935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:20:20.147948 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:20:20.147970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 04:20:20.148043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:20:20.148095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:20:20.148116 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:20:20.148161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 04:20:20.148181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:20:20.148200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:20:20.148219 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:20:20.148238 | orchestrator | 2026-03-24 04:20:20.148258 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-24 04:20:20.148276 | orchestrator | Tuesday 24 March 2026 04:20:18 +0000 (0:00:01.647) 0:01:21.407 ********* 2026-03-24 04:20:20.148295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 04:20:20.148326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:20:20.148359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:20:20.148375 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:20:20.148398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 04:20:31.782368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:20:31.782477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:20:31.782493 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:20:31.782506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 04:20:31.782517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:20:31.782620 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:20:31.782636 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:20:31.782646 | orchestrator | 2026-03-24 04:20:31.782657 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-24 04:20:31.782668 | orchestrator | Tuesday 24 March 2026 04:20:20 +0000 (0:00:02.074) 0:01:23.481 ********* 2026-03-24 04:20:31.782678 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-24 04:20:31.782689 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-24 04:20:31.782699 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-24 04:20:31.782709 | orchestrator | 2026-03-24 04:20:31.782719 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-24 04:20:31.782729 | orchestrator | Tuesday 24 March 2026 04:20:22 +0000 (0:00:02.494) 0:01:25.976 ********* 2026-03-24 04:20:31.782738 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-24 04:20:31.782748 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-24 04:20:31.782758 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 
2026-03-24 04:20:31.782767 | orchestrator | 2026-03-24 04:20:31.782793 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-24 04:20:31.782804 | orchestrator | Tuesday 24 March 2026 04:20:25 +0000 (0:00:02.609) 0:01:28.585 ********* 2026-03-24 04:20:31.782814 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-24 04:20:31.782823 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-24 04:20:31.782833 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-24 04:20:31.782843 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:20:31.782852 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-24 04:20:31.782861 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-24 04:20:31.782871 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:20:31.782880 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-24 04:20:31.782890 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:20:31.782899 | orchestrator | 2026-03-24 04:20:31.782910 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-24 04:20:31.782921 | orchestrator | Tuesday 24 March 2026 04:20:27 +0000 (0:00:02.577) 0:01:31.162 ********* 2026-03-24 04:20:31.782933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 04:20:31.782954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 04:20:31.782971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 04:20:31.782983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:20:31.783003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:20:35.347020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:20:35.347123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:20:35.347163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:20:35.347175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:20:35.347186 | orchestrator | 2026-03-24 04:20:35.347198 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-24 04:20:35.347210 | orchestrator | Tuesday 24 March 2026 04:20:31 +0000 (0:00:03.954) 0:01:35.117 ********* 2026-03-24 04:20:35.347221 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:20:35.347232 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:20:35.347242 | orchestrator | } 2026-03-24 
04:20:35.347252 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:20:35.347262 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:20:35.347272 | orchestrator | } 2026-03-24 04:20:35.347281 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:20:35.347291 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:20:35.347301 | orchestrator | } 2026-03-24 04:20:35.347311 | orchestrator | 2026-03-24 04:20:35.347321 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:20:35.347330 | orchestrator | Tuesday 24 March 2026 04:20:33 +0000 (0:00:01.360) 0:01:36.477 ********* 2026-03-24 04:20:35.347341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 04:20:35.347384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:20:35.347397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:20:35.347415 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:20:35.347426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 04:20:35.347436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:20:35.347447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:20:35.347462 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:20:35.347472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 04:20:35.347482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:20:35.347500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:20:40.873078 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:20:40.873189 | orchestrator | 2026-03-24 04:20:40.873207 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-24 04:20:40.873221 | orchestrator | Tuesday 24 March 2026 04:20:35 +0000 (0:00:02.201) 0:01:38.678 ********* 2026-03-24 04:20:40.873232 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:20:40.873244 | orchestrator | 2026-03-24 04:20:40.873255 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-24 04:20:40.873266 | orchestrator | Tuesday 24 March 2026 04:20:37 +0000 (0:00:01.958) 0:01:40.637 ********* 2026-03-24 04:20:40.873282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:20:40.873298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 04:20:40.873327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:40.873341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 04:20:40.873369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:20:40.873403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 04:20:40.873416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:40.873427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 04:20:40.873445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:20:40.873457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 04:20:40.873483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:42.707624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 04:20:42.707722 | orchestrator | 2026-03-24 04:20:42.707733 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-24 04:20:42.707741 | orchestrator | Tuesday 24 March 2026 04:20:41 +0000 (0:00:04.706) 0:01:45.343 ********* 2026-03-24 04:20:42.707751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:20:42.707776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 04:20:42.707783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:42.707790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 04:20:42.707814 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:20:42.707837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:20:42.707844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 04:20:42.707851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:42.707861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 04:20:42.707867 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:20:42.707873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:20:42.707884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-24 04:20:42.707894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:57.341714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-24 04:20:57.341833 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:20:57.341848 | orchestrator | 2026-03-24 04:20:57.341860 | orchestrator | TASK [haproxy-config : 
Configuring firewall for aodh] ************************** 2026-03-24 04:20:57.341871 | orchestrator | Tuesday 24 March 2026 04:20:43 +0000 (0:00:01.794) 0:01:47.138 ********* 2026-03-24 04:20:57.341882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:20:57.341896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:20:57.341907 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:20:57.341918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:20:57.341943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:20:57.341954 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:20:57.341964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:20:57.341993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:20:57.342003 | orchestrator | 
skipping: [testbed-node-2] 2026-03-24 04:20:57.342013 | orchestrator | 2026-03-24 04:20:57.342084 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-24 04:20:57.342095 | orchestrator | Tuesday 24 March 2026 04:20:45 +0000 (0:00:02.193) 0:01:49.331 ********* 2026-03-24 04:20:57.342104 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:20:57.342115 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:20:57.342125 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:20:57.342134 | orchestrator | 2026-03-24 04:20:57.342144 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-24 04:20:57.342153 | orchestrator | Tuesday 24 March 2026 04:20:48 +0000 (0:00:02.189) 0:01:51.521 ********* 2026-03-24 04:20:57.342163 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:20:57.342173 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:20:57.342183 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:20:57.342194 | orchestrator | 2026-03-24 04:20:57.342204 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-24 04:20:57.342215 | orchestrator | Tuesday 24 March 2026 04:20:51 +0000 (0:00:02.838) 0:01:54.359 ********* 2026-03-24 04:20:57.342226 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:20:57.342237 | orchestrator | 2026-03-24 04:20:57.342248 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-24 04:20:57.342258 | orchestrator | Tuesday 24 March 2026 04:20:52 +0000 (0:00:01.651) 0:01:56.011 ********* 2026-03-24 04:20:57.342291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:20:57.342307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:57.342320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:20:57.342346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:20:57.342358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2026-03-24 04:20:57.342369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:20:57.342403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:20:58.950705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:58.950789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:20:58.950796 | orchestrator | 2026-03-24 04:20:58.950802 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-24 04:20:58.950807 | orchestrator | Tuesday 24 March 2026 04:20:57 +0000 (0:00:04.655) 0:02:00.667 ********* 2026-03-24 04:20:58.950814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:20:58.950820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:58.950824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:20:58.950829 | orchestrator | skipping: [testbed-node-0] 
2026-03-24 04:20:58.950849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:20:58.950858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:58.950862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:20:58.950867 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:20:58.950871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:20:58.950876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-24 04:20:58.950883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:21:15.090212 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:21:15.090325 | orchestrator | 2026-03-24 04:21:15.090340 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-24 04:21:15.090366 | orchestrator | Tuesday 24 March 2026 04:20:58 +0000 (0:00:01.615) 0:02:02.282 ********* 2026-03-24 04:21:15.090378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:21:15.090392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-03-24 04:21:15.090403 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:21:15.090414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:21:15.090424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:21:15.090433 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:21:15.090443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:21:15.090453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:21:15.090463 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:21:15.090473 | orchestrator | 2026-03-24 04:21:15.090483 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-24 04:21:15.090493 | orchestrator | Tuesday 24 March 2026 04:21:00 +0000 (0:00:01.957) 0:02:04.239 ********* 2026-03-24 04:21:15.090503 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:21:15.090514 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:21:15.090523 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:21:15.090533 | orchestrator | 2026-03-24 04:21:15.090543 | orchestrator | TASK [proxysql-config : 
Copying over barbican ProxySQL rules config] *********** 2026-03-24 04:21:15.090553 | orchestrator | Tuesday 24 March 2026 04:21:03 +0000 (0:00:02.273) 0:02:06.513 ********* 2026-03-24 04:21:15.090563 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:21:15.090572 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:21:15.090629 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:21:15.090640 | orchestrator | 2026-03-24 04:21:15.090650 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-24 04:21:15.090659 | orchestrator | Tuesday 24 March 2026 04:21:05 +0000 (0:00:02.799) 0:02:09.312 ********* 2026-03-24 04:21:15.090669 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:21:15.090679 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:21:15.090709 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:21:15.090719 | orchestrator | 2026-03-24 04:21:15.090729 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-24 04:21:15.090739 | orchestrator | Tuesday 24 March 2026 04:21:07 +0000 (0:00:01.288) 0:02:10.601 ********* 2026-03-24 04:21:15.090749 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:21:15.090761 | orchestrator | 2026-03-24 04:21:15.090772 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-24 04:21:15.090784 | orchestrator | Tuesday 24 March 2026 04:21:08 +0000 (0:00:01.641) 0:02:12.242 ********* 2026-03-24 04:21:15.090797 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-24 04:21:15.090831 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-24 04:21:15.090844 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-24 04:21:15.090855 | orchestrator | 2026-03-24 04:21:15.090867 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-24 04:21:15.090878 | orchestrator | Tuesday 24 March 2026 04:21:12 +0000 (0:00:03.635) 0:02:15.878 ********* 2026-03-24 04:21:15.090897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-24 04:21:15.090917 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:21:15.090930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-24 04:21:15.090941 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:21:15.090959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-24 04:21:27.051515 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:21:27.051677 | orchestrator | 2026-03-24 04:21:27.051693 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-24 04:21:27.051704 | orchestrator | Tuesday 24 March 2026 04:21:15 +0000 (0:00:02.546) 0:02:18.425 ********* 2026-03-24 04:21:27.051715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-24 
04:21:27.051726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-24 04:21:27.051736 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:21:27.051745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-24 04:21:27.051753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-24 04:21:27.051779 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:21:27.051788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-24 04:21:27.051796 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-24 04:21:27.051805 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:21:27.051812 | orchestrator | 2026-03-24 04:21:27.051821 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-24 04:21:27.051829 | orchestrator | Tuesday 24 March 2026 04:21:17 +0000 (0:00:02.739) 0:02:21.164 ********* 2026-03-24 04:21:27.051837 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:21:27.051845 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:21:27.051853 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:21:27.051861 | orchestrator | 2026-03-24 04:21:27.051869 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-24 04:21:27.051877 | orchestrator | Tuesday 24 March 2026 04:21:19 +0000 (0:00:01.436) 0:02:22.600 ********* 2026-03-24 04:21:27.051884 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:21:27.051892 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:21:27.051900 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:21:27.051908 | orchestrator | 2026-03-24 04:21:27.051916 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-24 04:21:27.051924 | orchestrator | Tuesday 24 March 2026 04:21:21 +0000 (0:00:02.308) 0:02:24.909 ********* 2026-03-24 04:21:27.051932 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:21:27.051940 | orchestrator | 2026-03-24 04:21:27.051948 | orchestrator | TASK 
[haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-24 04:21:27.051956 | orchestrator | Tuesday 24 March 2026 04:21:23 +0000 (0:00:01.728) 0:02:26.637 ********* 2026-03-24 04:21:27.051990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:21:27.052003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:21:27.052019 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-24 04:21:27.052028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-24 04:21:27.052038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:27.052057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 04:21:29.115692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 04:21:29.115830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 04:21:29.115854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:29.115872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 04:21:29.115904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 04:21:29.115940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 04:21:29.115966 | orchestrator |
2026-03-24 04:21:29.115982 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-24 04:21:29.115997 | orchestrator | Tuesday 24 March 2026 04:21:28 +0000 (0:00:04.876) 0:02:31.514 *********
2026-03-24 04:21:29.116013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:29.116029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 04:21:29.116042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 04:21:29.116062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 04:21:29.116075 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:21:29.116104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:40.134108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 04:21:40.134250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 04:21:40.134269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 04:21:40.134284 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:21:40.134316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:40.134357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 04:21:40.134392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-24 04:21:40.134405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-24 04:21:40.134417 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:21:40.134429 | orchestrator |
2026-03-24 04:21:40.134441 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-24 04:21:40.134453 | orchestrator | Tuesday 24 March 2026 04:21:30 +0000 (0:00:02.066) 0:02:33.581 *********
2026-03-24 04:21:40.134465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:21:40.134479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:21:40.134491 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:21:40.134503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:21:40.134514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:21:40.134526 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:21:40.134537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:21:40.134567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:21:40.134593 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:21:40.134649 | orchestrator |
2026-03-24 04:21:40.134668 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-03-24 04:21:40.134686 | orchestrator | Tuesday 24 March 2026 04:21:32 +0000 (0:00:02.034) 0:02:35.616 *********
2026-03-24 04:21:40.134704 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:21:40.134722 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:21:40.134740 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:21:40.134758 | orchestrator |
2026-03-24 04:21:40.134777 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-03-24 04:21:40.134795 | orchestrator | Tuesday 24 March 2026 04:21:34 +0000 (0:00:02.208) 0:02:37.825 *********
2026-03-24 04:21:40.134812 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:21:40.134832 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:21:40.134851 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:21:40.134869 | orchestrator |
2026-03-24 04:21:40.134885 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-03-24 04:21:40.134897 | orchestrator | Tuesday 24 March 2026 04:21:37 +0000 (0:00:02.755) 0:02:40.580 *********
2026-03-24 04:21:40.134907 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:21:40.134919 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:21:40.134929 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:21:40.134940 | orchestrator |
2026-03-24 04:21:40.134951 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-24 04:21:40.134962 | orchestrator | Tuesday 24 March 2026 04:21:38 +0000 (0:00:01.557) 0:02:42.138 *********
2026-03-24 04:21:40.134973 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:21:40.134984 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:21:40.135005 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:21:45.517686 | orchestrator |
2026-03-24 04:21:45.517846 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-24 04:21:45.517867 | orchestrator | Tuesday 24 March 2026 04:21:40 +0000 (0:00:01.329) 0:02:43.468 *********
2026-03-24 04:21:45.517879 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 04:21:45.517891 | orchestrator |
2026-03-24 04:21:45.517903 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-24 04:21:45.517914 | orchestrator | Tuesday 24 March 2026 04:21:41 +0000 (0:00:01.823) 0:02:45.292 *********
2026-03-24 04:21:45.517932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:45.517983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 04:21:45.518084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 04:21:45.518116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 04:21:45.518129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 04:21:45.518204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 04:21:45.518218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-24 04:21:45.518233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:45.518257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 04:21:45.518276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:45.518376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 04:21:47.280254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-24 04:21:47.280435 | orchestrator |
2026-03-24 04:21:47.280450 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-24 04:21:47.280462 | orchestrator | Tuesday 24 March 2026 04:21:46 +0000 (0:00:04.737) 0:02:50.029 *********
2026-03-24 04:21:47.280481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:47.280504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-24 04:21:48.557555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-24 04:21:48.557722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-24 04:21:48.557741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-24 04:21:48.557770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:21:48.557786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:21:48.557815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 04:21:48.557828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-24 04:21:48.557847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 04:21:48.557860 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:21:48.557874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 04:21:48.557886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 04:21:48.558899 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:21:48.558964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-24 04:21:48.558978 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:21:48.559015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:22:03.543475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-24 04:22:03.543594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-24 04:22:03.543613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-24 04:22:03.543675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-24 04:22:03.543688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:22:03.543739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-24 04:22:03.543753 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:03.543767 | orchestrator | 2026-03-24 04:22:03.543780 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-24 04:22:03.543792 | orchestrator | Tuesday 24 March 2026 04:21:48 +0000 (0:00:01.866) 0:02:51.895 ********* 2026-03-24 04:22:03.543820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:03.543836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:03.543849 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:03.543860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:03.543871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-24 
04:22:03.543882 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:03.543893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:03.543905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:03.543916 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:03.543927 | orchestrator | 2026-03-24 04:22:03.543938 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-24 04:22:03.543949 | orchestrator | Tuesday 24 March 2026 04:21:50 +0000 (0:00:02.057) 0:02:53.953 ********* 2026-03-24 04:22:03.543960 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:22:03.543972 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:22:03.543982 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:22:03.543995 | orchestrator | 2026-03-24 04:22:03.544006 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-24 04:22:03.544019 | orchestrator | Tuesday 24 March 2026 04:21:52 +0000 (0:00:02.227) 0:02:56.180 ********* 2026-03-24 04:22:03.544031 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:22:03.544043 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:22:03.544056 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:22:03.544068 | orchestrator | 2026-03-24 04:22:03.544080 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-24 04:22:03.544092 | orchestrator | Tuesday 24 March 2026 04:21:55 +0000 (0:00:03.064) 0:02:59.244 ********* 2026-03-24 04:22:03.544119 | orchestrator | skipping: 
[testbed-node-0] 2026-03-24 04:22:03.544147 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:03.544179 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:03.544200 | orchestrator | 2026-03-24 04:22:03.544218 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-24 04:22:03.544237 | orchestrator | Tuesday 24 March 2026 04:21:57 +0000 (0:00:01.331) 0:03:00.576 ********* 2026-03-24 04:22:03.544256 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:22:03.544274 | orchestrator | 2026-03-24 04:22:03.544293 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-24 04:22:03.544312 | orchestrator | Tuesday 24 March 2026 04:21:59 +0000 (0:00:01.805) 0:03:02.381 ********* 2026-03-24 04:22:03.544359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 04:22:04.641134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 04:22:04.641281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 04:22:04.641319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 04:22:04.641347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-24 04:22:04.641370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 04:22:07.989511 | orchestrator | 2026-03-24 04:22:07.989608 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-24 04:22:07.989650 | orchestrator | Tuesday 24 March 2026 04:22:04 +0000 (0:00:05.597) 0:03:07.978 ********* 2026-03-24 04:22:07.989708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 04:22:07.989724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 04:22:07.989736 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:07.989766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 04:22:07.989792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 04:22:07.989803 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:07.989821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-24 04:22:26.021362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-24 04:22:26.021484 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:26.021504 | orchestrator | 2026-03-24 04:22:26.021518 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-24 04:22:26.021530 | orchestrator | Tuesday 24 March 2026 04:22:09 +0000 (0:00:04.466) 0:03:12.445 ********* 2026-03-24 04:22:26.021543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 04:22:26.021578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 04:22:26.021591 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:26.021603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option 
httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 04:22:26.021691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 04:22:26.021716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 04:22:26.021728 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:26.021740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-24 04:22:26.021751 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:26.021762 | orchestrator | 2026-03-24 04:22:26.021774 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-24 04:22:26.021785 | orchestrator | Tuesday 24 March 2026 04:22:13 +0000 (0:00:04.378) 0:03:16.824 ********* 2026-03-24 04:22:26.021796 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:22:26.021809 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:22:26.021820 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:22:26.021830 | orchestrator | 2026-03-24 04:22:26.021841 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-24 04:22:26.021852 | orchestrator | Tuesday 24 March 2026 04:22:15 +0000 (0:00:02.290) 0:03:19.114 ********* 2026-03-24 04:22:26.021863 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:22:26.021874 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:22:26.021885 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:22:26.021895 | orchestrator | 2026-03-24 04:22:26.021906 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-24 04:22:26.021926 | orchestrator | Tuesday 24 March 2026 04:22:18 +0000 (0:00:02.694) 0:03:21.809 ********* 2026-03-24 04:22:26.021938 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:26.021949 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:26.021959 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:26.021970 | orchestrator | 2026-03-24 04:22:26.021981 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-24 04:22:26.021991 | orchestrator | Tuesday 24 March 2026 04:22:19 +0000 (0:00:01.442) 0:03:23.252 ********* 2026-03-24 04:22:26.022002 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-24 04:22:26.022013 | orchestrator | 2026-03-24 04:22:26.022091 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-24 04:22:26.022102 | orchestrator | Tuesday 24 March 2026 04:22:21 +0000 (0:00:01.680) 0:03:24.932 ********* 2026-03-24 04:22:26.022114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:22:26.022137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:22:42.338347 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:22:42.338489 | orchestrator | 2026-03-24 04:22:42.338513 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-24 04:22:42.338532 | orchestrator | Tuesday 24 March 2026 04:22:26 +0000 (0:00:04.422) 0:03:29.355 ********* 2026-03-24 04:22:42.338551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:22:42.338600 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:42.338620 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:22:42.338637 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:42.338684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:22:42.338702 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:42.338717 | orchestrator | 2026-03-24 04:22:42.338733 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-24 04:22:42.338750 | orchestrator | Tuesday 24 March 2026 04:22:27 +0000 (0:00:01.704) 0:03:31.060 ********* 2026-03-24 
04:22:42.338769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:42.338791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:42.338810 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:42.338859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:42.338878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:42.338897 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:42.338923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:42.338941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:22:42.338971 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:42.338989 | orchestrator | 2026-03-24 04:22:42.339007 | orchestrator | TASK [proxysql-config : Copying over grafana 
ProxySQL users config] ************ 2026-03-24 04:22:42.339025 | orchestrator | Tuesday 24 March 2026 04:22:29 +0000 (0:00:01.422) 0:03:32.483 ********* 2026-03-24 04:22:42.339044 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:22:42.339061 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:22:42.339078 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:22:42.339096 | orchestrator | 2026-03-24 04:22:42.339113 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-24 04:22:42.339131 | orchestrator | Tuesday 24 March 2026 04:22:31 +0000 (0:00:02.303) 0:03:34.786 ********* 2026-03-24 04:22:42.339149 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:22:42.339167 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:22:42.339185 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:22:42.339200 | orchestrator | 2026-03-24 04:22:42.339216 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-24 04:22:42.339232 | orchestrator | Tuesday 24 March 2026 04:22:34 +0000 (0:00:02.982) 0:03:37.769 ********* 2026-03-24 04:22:42.339248 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:42.339265 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:42.339282 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:42.339299 | orchestrator | 2026-03-24 04:22:42.339317 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-24 04:22:42.339335 | orchestrator | Tuesday 24 March 2026 04:22:35 +0000 (0:00:01.372) 0:03:39.141 ********* 2026-03-24 04:22:42.339352 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:22:42.339369 | orchestrator | 2026-03-24 04:22:42.339387 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-24 04:22:42.339405 | orchestrator | Tuesday 24 March 2026 04:22:37 +0000 (0:00:01.852) 
0:03:40.994 ********* 2026-03-24 04:22:42.339442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 04:22:44.040951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 04:22:44.041083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-24 04:22:44.041124 | orchestrator | 2026-03-24 04:22:44.041139 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-24 04:22:44.041152 | orchestrator | Tuesday 24 March 2026 04:22:42 +0000 (0:00:04.675) 0:03:45.669 ********* 2026-03-24 04:22:44.041165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 04:22:44.041178 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:44.041202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 04:22:52.781985 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:52.782165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-24 04:22:52.782188 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:52.782201 | orchestrator | 2026-03-24 04:22:52.782214 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-24 04:22:52.782227 | orchestrator | Tuesday 24 March 2026 04:22:44 +0000 (0:00:01.711) 0:03:47.381 ********* 2026-03-24 04:22:52.782343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-24 04:22:52.782367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 04:22:52.782386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-24 04:22:52.782400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2026-03-24 04:22:52.782412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-24 04:22:52.782424 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:52.782456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-24 04:22:52.782468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 04:22:52.782480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-24 04:22:52.782494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 04:22:52.782519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
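(Editorial note on the items above: each `haproxy` sub-dict that kolla-ansible loops over — `keystone_internal`, `horizon_external`, etc. — ends up rendered into an HAProxy frontend/listen section by the `haproxy-config` role's Jinja2 templates. The sketch below is a minimal, hypothetical illustration of that mapping using one of the dicts visible in this log; the VIP address and the rendering logic are assumptions for illustration, not the role's actual template.)

```python
# Minimal sketch: how a Kolla-style haproxy service entry (as dumped in the
# log items above) could map to an HAProxy "listen" section.
# Illustration only — the real rendering is done by Jinja2 templates in
# kolla-ansible's haproxy-config role; the VIP below is an assumed value.

def render_listen_section(name, svc, vip="192.168.16.9"):
    """Render one HAProxy listen block from a Kolla-style service dict."""
    lines = [f"listen {name}"]
    # 'listen_port' is the port HAProxy binds on the VIP.
    lines.append(f"    bind {vip}:{svc['listen_port']}")
    # Extra per-backend options, e.g. 'balance roundrobin', 'option httpchk'.
    for extra in svc.get("backend_http_extra", []):
        lines.append(f"    {extra}")
    return "\n".join(lines)

# Dict shape copied from the 'keystone_internal' item in this log.
keystone_internal = {
    "enabled": True, "mode": "http", "external": False, "tls_backend": "no",
    "port": "5000", "listen_port": "5000",
    "backend_http_extra": ["balance roundrobin", "option httpchk"],
}

print(render_listen_section("keystone_internal", keystone_internal))
```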
2026-03-24 04:22:52.782532 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:52.782545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-24 04:22:52.782559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 04:22:52.782581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-24 04:22:52.782595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-24 04:22:52.782607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-24 04:22:52.782620 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:52.782633 | orchestrator | 2026-03-24 04:22:52.782646 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users 
config] ************ 2026-03-24 04:22:52.782728 | orchestrator | Tuesday 24 March 2026 04:22:46 +0000 (0:00:02.094) 0:03:49.475 ********* 2026-03-24 04:22:52.782744 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:22:52.782764 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:22:52.782775 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:22:52.782786 | orchestrator | 2026-03-24 04:22:52.782797 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-24 04:22:52.782808 | orchestrator | Tuesday 24 March 2026 04:22:48 +0000 (0:00:02.218) 0:03:51.694 ********* 2026-03-24 04:22:52.782819 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:22:52.782829 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:22:52.782840 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:22:52.782851 | orchestrator | 2026-03-24 04:22:52.782862 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-24 04:22:52.782873 | orchestrator | Tuesday 24 March 2026 04:22:51 +0000 (0:00:02.890) 0:03:54.585 ********* 2026-03-24 04:22:52.782884 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:22:52.782895 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:22:52.782906 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:22:52.782916 | orchestrator | 2026-03-24 04:22:52.782927 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-24 04:22:52.782939 | orchestrator | Tuesday 24 March 2026 04:22:52 +0000 (0:00:01.330) 0:03:55.915 ********* 2026-03-24 04:22:52.782958 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:02.458244 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:02.458386 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:23:02.458411 | orchestrator | 2026-03-24 04:23:02.458431 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-24 
04:23:02.458450 | orchestrator | Tuesday 24 March 2026 04:22:53 +0000 (0:00:01.358) 0:03:57.274 ********* 2026-03-24 04:23:02.458472 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:23:02.458488 | orchestrator | 2026-03-24 04:23:02.458505 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-24 04:23:02.458523 | orchestrator | Tuesday 24 March 2026 04:22:55 +0000 (0:00:01.791) 0:03:59.065 ********* 2026-03-24 04:23:02.458544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-24 04:23:02.458596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 04:23:02.458615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 04:23:02.458651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-24 04:23:02.458748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 04:23:02.458770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 04:23:02.458806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-24 04:23:02.458831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 04:23:02.458861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 04:23:02.458885 | orchestrator | 2026-03-24 04:23:02.458902 | 
orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-24 04:23:02.458919 | orchestrator | Tuesday 24 March 2026 04:23:00 +0000 (0:00:04.823) 0:04:03.889 ********* 2026-03-24 04:23:02.458946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-24 04:23:04.147048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-03-24 04:23:04.147154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 04:23:04.147170 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:04.147185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-24 04:23:04.147214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 04:23:04.147225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 04:23:04.147235 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:04.147264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-24 04:23:04.147297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-24 04:23:04.147309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-24 04:23:04.147319 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:23:04.147329 | orchestrator | 2026-03-24 04:23:04.147341 | orchestrator | TASK [haproxy-config : Configuring firewall for 
keystone] ********************** 2026-03-24 04:23:04.147352 | orchestrator | Tuesday 24 March 2026 04:23:02 +0000 (0:00:01.901) 0:04:05.791 ********* 2026-03-24 04:23:04.147364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-24 04:23:04.147382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-24 04:23:04.147394 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:04.147405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-24 04:23:04.147415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-24 04:23:04.147425 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:04.147435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-24 04:23:04.147452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-24 04:23:04.147463 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:23:04.147472 | orchestrator | 2026-03-24 04:23:04.147482 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-24 04:23:04.147498 | orchestrator | Tuesday 24 March 2026 04:23:04 +0000 (0:00:01.689) 0:04:07.480 ********* 2026-03-24 04:23:19.787889 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:23:19.788025 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:23:19.788050 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:23:19.788065 | orchestrator | 2026-03-24 04:23:19.788082 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-24 04:23:19.788100 | orchestrator | Tuesday 24 March 2026 04:23:06 +0000 (0:00:02.226) 0:04:09.707 ********* 2026-03-24 04:23:19.788115 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:23:19.788130 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:23:19.788144 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:23:19.788158 | orchestrator | 2026-03-24 04:23:19.788173 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-24 04:23:19.788189 | orchestrator | Tuesday 24 March 2026 04:23:09 +0000 (0:00:03.200) 0:04:12.907 ********* 2026-03-24 04:23:19.788203 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:19.788220 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:19.788233 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:23:19.788247 | orchestrator | 2026-03-24 04:23:19.788260 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-24 04:23:19.788269 | orchestrator | Tuesday 24 March 2026 04:23:10 +0000 (0:00:01.364) 
0:04:14.272 ********* 2026-03-24 04:23:19.788278 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:23:19.788287 | orchestrator | 2026-03-24 04:23:19.788296 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-24 04:23:19.788305 | orchestrator | Tuesday 24 March 2026 04:23:12 +0000 (0:00:01.835) 0:04:16.108 ********* 2026-03-24 04:23:19.788319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:23:19.788354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:23:19.788400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:23:19.788443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:23:19.788461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:23:19.788478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})  2026-03-24 04:23:19.788494 | orchestrator | 2026-03-24 04:23:19.788510 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-24 04:23:19.788536 | orchestrator | Tuesday 24 March 2026 04:23:18 +0000 (0:00:05.294) 0:04:21.402 ********* 2026-03-24 04:23:19.788642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:23:19.788672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:23:32.660584 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:32.660727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:23:32.660749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:23:32.660762 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:32.660789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:23:32.660822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:23:32.660833 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:23:32.660843 | orchestrator | 2026-03-24 04:23:32.660854 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-24 04:23:32.660865 | orchestrator | Tuesday 24 March 2026 04:23:19 +0000 (0:00:01.719) 0:04:23.122 ********* 2026-03-24 04:23:32.660891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:32.660905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:32.660917 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:32.660927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:32.660937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:32.660947 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:32.660956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:32.660966 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:32.660976 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:23:32.660986 | orchestrator | 2026-03-24 04:23:32.660995 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-24 04:23:32.661005 | orchestrator | Tuesday 24 March 2026 04:23:21 +0000 (0:00:02.063) 0:04:25.185 ********* 2026-03-24 04:23:32.661023 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:23:32.661033 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:23:32.661043 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:23:32.661053 | orchestrator | 2026-03-24 04:23:32.661062 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-24 04:23:32.661072 | orchestrator | Tuesday 24 March 2026 04:23:24 +0000 (0:00:02.327) 0:04:27.512 ********* 2026-03-24 04:23:32.661081 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:23:32.661091 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:23:32.661100 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:23:32.661110 | orchestrator | 2026-03-24 04:23:32.661119 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-24 04:23:32.661130 | orchestrator | Tuesday 24 March 2026 04:23:27 +0000 (0:00:02.859) 0:04:30.372 ********* 2026-03-24 04:23:32.661141 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:23:32.661152 | orchestrator | 2026-03-24 04:23:32.661163 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-24 04:23:32.661174 | orchestrator | Tuesday 24 March 2026 04:23:29 +0000 (0:00:02.046) 0:04:32.418 ********* 2026-03-24 04:23:32.661192 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:23:32.661206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:23:32.661226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 04:23:34.483026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 04:23:34.483153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': 
['option httpchk']}}}}) 2026-03-24 04:23:34.483202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:23:34.483216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:23:34.483229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:23:34.483259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 04:23:34.483281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 04:23:34.483293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 04:23:34.483309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 04:23:34.483326 | orchestrator | 2026-03-24 04:23:34.483349 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-24 04:23:34.483369 | orchestrator | Tuesday 24 March 2026 04:23:33 +0000 (0:00:04.702) 0:04:37.121 ********* 2026-03-24 04:23:34.483392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:23:34.483424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:23:37.673379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 04:23:37.674340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 04:23:37.674394 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:37.674429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:23:37.674443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:23:37.674455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 04:23:37.674491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 04:23:37.674527 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:37.674539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:23:37.674551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:23:37.674568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-24 04:23:37.674580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-24 04:23:37.674591 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:23:37.674602 | orchestrator | 2026-03-24 04:23:37.674615 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-24 04:23:37.674628 | orchestrator | Tuesday 24 March 2026 04:23:35 +0000 (0:00:01.915) 0:04:39.036 ********* 2026-03-24 04:23:37.674640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:37.674655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:37.674674 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:37.674686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-03-24 04:23:37.674735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:53.812822 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:53.812901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:53.812911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:23:53.812917 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:23:53.812922 | orchestrator | 2026-03-24 04:23:53.812927 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-24 04:23:53.812932 | orchestrator | Tuesday 24 March 2026 04:23:37 +0000 (0:00:01.970) 0:04:41.006 ********* 2026-03-24 04:23:53.812936 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:23:53.812940 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:23:53.812944 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:23:53.812948 | orchestrator | 2026-03-24 04:23:53.812952 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-24 04:23:53.812956 | orchestrator | Tuesday 24 March 2026 04:23:39 +0000 (0:00:02.259) 0:04:43.266 ********* 2026-03-24 04:23:53.812960 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:23:53.812963 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:23:53.812967 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:23:53.812971 | 
orchestrator | 2026-03-24 04:23:53.812975 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-24 04:23:53.812979 | orchestrator | Tuesday 24 March 2026 04:23:42 +0000 (0:00:02.967) 0:04:46.233 ********* 2026-03-24 04:23:53.812983 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:23:53.812987 | orchestrator | 2026-03-24 04:23:53.812991 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-24 04:23:53.812995 | orchestrator | Tuesday 24 March 2026 04:23:45 +0000 (0:00:02.689) 0:04:48.923 ********* 2026-03-24 04:23:53.812998 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:23:53.813002 | orchestrator | 2026-03-24 04:23:53.813017 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-24 04:23:53.813021 | orchestrator | Tuesday 24 March 2026 04:23:50 +0000 (0:00:04.457) 0:04:53.380 ********* 2026-03-24 04:23:53.813028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:23:53.813059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 04:23:53.813065 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:53.813070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:23:53.813074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 04:23:53.813082 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:53.813091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:23:57.670598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 04:23:57.670741 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:23:57.670762 | orchestrator | 2026-03-24 04:23:57.670777 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-24 04:23:57.670789 | orchestrator | Tuesday 24 March 2026 04:23:53 +0000 (0:00:03.754) 0:04:57.135 ********* 2026-03-24 04:23:57.670810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:23:57.670847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 04:23:57.670860 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:23:57.670894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:23:57.670914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 04:23:57.670934 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:23:57.670948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:23:57.670982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-24 04:24:13.472155 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:24:13.472274 | orchestrator | 2026-03-24 04:24:13.472291 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-24 04:24:13.472305 | orchestrator | Tuesday 24 March 2026 04:23:57 +0000 (0:00:03.854) 0:05:00.990 ********* 2026-03-24 04:24:13.472319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 04:24:13.472354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 04:24:13.472390 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:24:13.472403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 04:24:13.472415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  
2026-03-24 04:24:13.472426 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:24:13.472438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 04:24:13.472449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-24 04:24:13.472460 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:24:13.472471 | orchestrator | 2026-03-24 04:24:13.472482 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-24 04:24:13.472494 | orchestrator | Tuesday 24 March 2026 04:24:01 +0000 (0:00:03.691) 0:05:04.682 ********* 2026-03-24 04:24:13.472505 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:24:13.472532 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:24:13.472544 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:24:13.472555 | orchestrator | 2026-03-24 04:24:13.472566 | orchestrator | TASK [proxysql-config : Copying over 
mariadb ProxySQL rules config] ************ 2026-03-24 04:24:13.472577 | orchestrator | Tuesday 24 March 2026 04:24:04 +0000 (0:00:02.788) 0:05:07.471 ********* 2026-03-24 04:24:13.472588 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:24:13.472603 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:24:13.472623 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:24:13.472642 | orchestrator | 2026-03-24 04:24:13.472661 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-24 04:24:13.472679 | orchestrator | Tuesday 24 March 2026 04:24:06 +0000 (0:00:02.732) 0:05:10.203 ********* 2026-03-24 04:24:13.472698 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:24:13.472774 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:24:13.472795 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:24:13.472815 | orchestrator | 2026-03-24 04:24:13.472835 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-24 04:24:13.472854 | orchestrator | Tuesday 24 March 2026 04:24:08 +0000 (0:00:01.369) 0:05:11.573 ********* 2026-03-24 04:24:13.472873 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:24:13.472885 | orchestrator | 2026-03-24 04:24:13.472895 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-24 04:24:13.472906 | orchestrator | Tuesday 24 March 2026 04:24:10 +0000 (0:00:02.335) 0:05:13.909 ********* 2026-03-24 04:24:13.472927 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-24 04:24:13.472941 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-24 04:24:13.472952 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-24 04:24:13.472964 | orchestrator 
| 2026-03-24 04:24:13.473000 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-24 04:24:13.473026 | orchestrator | Tuesday 24 March 2026 04:24:13 +0000 (0:00:02.778) 0:05:16.687 ********* 2026-03-24 04:24:13.473049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-24 04:24:29.051555 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:24:29.051676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-24 04:24:29.051697 | orchestrator | skipping: 
[testbed-node-1] 2026-03-24 04:24:29.051727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-24 04:24:29.051795 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:24:29.051807 | orchestrator | 2026-03-24 04:24:29.051820 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-24 04:24:29.051832 | orchestrator | Tuesday 24 March 2026 04:24:15 +0000 (0:00:02.050) 0:05:18.738 ********* 2026-03-24 04:24:29.051845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-24 04:24:29.051857 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:24:29.051869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-24 04:24:29.051880 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:24:29.051893 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-24 04:24:29.051904 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:24:29.051915 | orchestrator | 2026-03-24 04:24:29.051926 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-24 04:24:29.051937 | orchestrator | Tuesday 24 March 2026 04:24:16 +0000 (0:00:01.557) 0:05:20.295 ********* 2026-03-24 04:24:29.051948 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:24:29.051959 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:24:29.051970 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:24:29.051981 | orchestrator | 2026-03-24 04:24:29.051992 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-24 04:24:29.052003 | orchestrator | Tuesday 24 March 2026 04:24:18 +0000 (0:00:01.679) 0:05:21.975 ********* 2026-03-24 04:24:29.052014 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:24:29.052025 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:24:29.052059 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:24:29.052072 | orchestrator | 2026-03-24 04:24:29.052085 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-24 04:24:29.052097 | orchestrator | Tuesday 24 March 2026 04:24:21 +0000 (0:00:02.502) 0:05:24.477 ********* 2026-03-24 04:24:29.052110 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:24:29.052122 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:24:29.052135 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:24:29.052147 | orchestrator | 2026-03-24 04:24:29.052160 | orchestrator | TASK [include_role : neutron] 
************************************************** 2026-03-24 04:24:29.052172 | orchestrator | Tuesday 24 March 2026 04:24:22 +0000 (0:00:01.553) 0:05:26.031 ********* 2026-03-24 04:24:29.052185 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:24:29.052197 | orchestrator | 2026-03-24 04:24:29.052227 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-24 04:24:29.052239 | orchestrator | Tuesday 24 March 2026 04:24:24 +0000 (0:00:01.983) 0:05:28.015 ********* 2026-03-24 04:24:29.052294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:24:29.052315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:29.052331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-24 04:24:29.052355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-24 04:24:29.052379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:29.237198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:29.237314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:29.237333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-24 04:24:29.237346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:29.237380 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:29.237393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-24 04:24:29.237422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:29.237440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:29.237455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 04:24:29.237468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:29.237488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:24:29.237509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:29.315242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-24 04:24:29.315368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-24 04:24:29.315407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:29.315421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:29.315452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:24:29.315472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:29.315485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:29.315507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-24 04:24:29.315519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-24 04:24:29.315539 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-24 04:24:30.556254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:30.556361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:30.556398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:30.556412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:30.556432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-24 04:24:30.556454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:30.556514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:30.556539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:30.556560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-24 04:24:30.556598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 04:24:30.556622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:30.556643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:30.556687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.558244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-24 04:24:31.558357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:31.558372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.558386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 04:24:31.558398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:31.558408 | orchestrator | 2026-03-24 04:24:31.558419 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-24 04:24:31.558442 | orchestrator | Tuesday 24 March 2026 04:24:30 +0000 (0:00:05.878) 0:05:33.893 ********* 2026-03-24 04:24:31.558470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:24:31.558488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.558499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-24 04:24:31.558509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-24 04:24:31.558530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.694900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:31.694995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:31.695012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-24 04:24:31.695026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:31.695040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.695075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-24 04:24:31.695119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:31.695164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.695181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 04:24:31.695198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:31.695219 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:24:31.695247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:24:31.695291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.753644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-24 04:24:31.753848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-24 04:24:31.753882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.753905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:31.753958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:31.754007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:24:31.754106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-24 04:24:31.754238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.754274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:31.754302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-24 04:24:31.754357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.842998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-24 04:24:31.843090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.843103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-24 04:24:31.843128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:31.843157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:31.843166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:31.843191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.843201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-24 04:24:31.843210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 
04:24:31.843230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:31.843240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:31.843254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-24 04:24:47.516738 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:24:47.516946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-24 04:24:47.516968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-24 04:24:47.516982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-24 04:24:47.517036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-24 04:24:47.517052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 
'timeout': '30'}}})
2026-03-24 04:24:47.517064 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:24:47.517075 | orchestrator |
2026-03-24 04:24:47.517088 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-03-24 04:24:47.517100 | orchestrator | Tuesday 24 March 2026 04:24:32 +0000 (0:00:02.293) 0:05:36.186 *********
2026-03-24 04:24:47.517112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:24:47.517147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:24:47.517161 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:24:47.517172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:24:47.517184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:24:47.517195 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:24:47.517205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:24:47.517216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-03-24 04:24:47.517227 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:24:47.517238 | orchestrator |
2026-03-24 04:24:47.517249 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-03-24 04:24:47.517273 | orchestrator | Tuesday 24 March 2026 04:24:35 +0000 (0:00:02.498) 0:05:38.685 *********
2026-03-24 04:24:47.517285 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:24:47.517298 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:24:47.517311 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:24:47.517323 | orchestrator |
2026-03-24 04:24:47.517335 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-03-24 04:24:47.517348 | orchestrator | Tuesday 24 March 2026 04:24:37 +0000 (0:00:02.156) 0:05:40.842 *********
2026-03-24 04:24:47.517360 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:24:47.517372 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:24:47.517385 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:24:47.517396 | orchestrator |
2026-03-24 04:24:47.517409 | orchestrator | TASK [include_role : placement] ************************************************
2026-03-24 04:24:47.517422 | orchestrator | Tuesday 24 March 2026 04:24:40 +0000 (0:00:02.468) 0:05:43.823 *********
2026-03-24 04:24:47.517434 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 04:24:47.517446 | orchestrator |
2026-03-24 04:24:47.517458 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-03-24 04:24:47.517471 | orchestrator | Tuesday 24 March 2026 04:24:42 +0000 (0:00:02.468) 0:05:46.292 *********
2026-03-24 04:24:47.517491 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-24 04:24:47.517516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-24 04:25:04.187850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-03-24 04:25:04.187984 | orchestrator |
2026-03-24 04:25:04.188001 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-03-24 04:25:04.188013 | orchestrator | Tuesday 24 March 2026 04:24:47 +0000 (0:00:04.558) 0:05:50.851 *********
2026-03-24 04:25:04.188024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-24 04:25:04.188050 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:25:04.188063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-24 04:25:04.188074 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:25:04.188103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-24 04:25:04.188122 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:25:04.188132 | orchestrator | 2026-03-24 04:25:04.188142 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-24 04:25:04.188152 | orchestrator | Tuesday 24 March 2026 04:24:49 +0000 (0:00:01.546) 0:05:52.397 ********* 2026-03-24 04:25:04.188163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-24 04:25:04.188176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-24 04:25:04.188187 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:25:04.188198 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-03-24 04:25:04.188208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-03-24 04:25:04.188218 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:25:04.188228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-03-24 04:25:04.188238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-03-24 04:25:04.188252 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:25:04.188262 | orchestrator |
2026-03-24 04:25:04.188272 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-03-24 04:25:04.188281 | orchestrator | Tuesday 24 March 2026 04:24:50 +0000 (0:00:01.826) 0:05:54.223 *********
2026-03-24 04:25:04.188291 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:25:04.188302 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:25:04.188311 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:25:04.188321 | orchestrator |
2026-03-24 04:25:04.188330 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-03-24 04:25:04.188342 | orchestrator | Tuesday 24 March 2026 04:24:53 +0000 (0:00:02.289) 0:05:56.512 *********
2026-03-24 04:25:04.188353 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:25:04.188364 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:25:04.188375 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:25:04.188386 | orchestrator |
2026-03-24 04:25:04.188397 | orchestrator | TASK [include_role : nova] *****************************************************
2026-03-24 04:25:04.188407 | orchestrator | Tuesday 24 March 2026 04:24:56 +0000 (0:00:02.927) 0:05:59.440 *********
2026-03-24 04:25:04.188418 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 04:25:04.188429 | orchestrator |
2026-03-24 04:25:04.188440 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-03-24 04:25:04.188450 | orchestrator | Tuesday 24 March 2026 04:24:58 +0000 (0:00:02.262) 0:06:01.703 *********
2026-03-24 04:25:04.188470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:25:05.287458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:25:05.287588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:25:05.287606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:25:05.287640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:25:05.287671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:25:05.287684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:25:05.287695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:25:05.287711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:25:05.287722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-03-24 04:25:05.287748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-24 04:25:06.032443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-24 04:25:06.032538 | orchestrator |
2026-03-24 04:25:06.032554 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-03-24 04:25:06.032567 | orchestrator | Tuesday 24 March 2026 04:25:05 +0000 (0:00:06.926) 0:06:08.630 *********
2026-03-24 04:25:06.032599 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:25:06.032614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:25:06.032648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:25:06.032677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:25:06.032689 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:25:06.032702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:25:06.032719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:25:06.032739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:25:06.032752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:25:06.032830 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:25:06.032855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:25:24.481486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:25:24.481626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-24 04:25:24.481669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-24 04:25:24.481684 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:25:24.481697 | orchestrator | 2026-03-24 04:25:24.481709 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-24 04:25:24.481722 | orchestrator | Tuesday 24 March 2026 04:25:07 +0000 (0:00:01.831) 0:06:10.461 ********* 2026-03-24 04:25:24.481733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 
04:25:24.481773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481785 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:25:24.481828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481893 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:25:24.481904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:25:24.481964 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:25:24.481975 | orchestrator | 2026-03-24 04:25:24.481986 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-24 04:25:24.481999 | orchestrator | Tuesday 24 March 2026 04:25:09 +0000 (0:00:02.528) 0:06:12.990 ********* 2026-03-24 04:25:24.482011 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:25:24.482087 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:25:24.482100 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:25:24.482111 | orchestrator | 2026-03-24 04:25:24.482124 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-24 04:25:24.482136 | orchestrator | Tuesday 24 March 2026 04:25:11 +0000 (0:00:02.263) 0:06:15.253 ********* 2026-03-24 04:25:24.482148 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:25:24.482160 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:25:24.482172 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:25:24.482184 | orchestrator | 2026-03-24 04:25:24.482197 | orchestrator | TASK [include_role : nova-cell] 
************************************************ 2026-03-24 04:25:24.482209 | orchestrator | Tuesday 24 March 2026 04:25:14 +0000 (0:00:02.742) 0:06:17.995 ********* 2026-03-24 04:25:24.482221 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:25:24.482233 | orchestrator | 2026-03-24 04:25:24.482245 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-24 04:25:24.482257 | orchestrator | Tuesday 24 March 2026 04:25:17 +0000 (0:00:02.802) 0:06:20.798 ********* 2026-03-24 04:25:24.482270 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-24 04:25:24.482284 | orchestrator | 2026-03-24 04:25:24.482296 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-24 04:25:24.482308 | orchestrator | Tuesday 24 March 2026 04:25:19 +0000 (0:00:01.609) 0:06:22.408 ********* 2026-03-24 04:25:24.482322 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-24 04:25:24.482338 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-24 04:25:24.482361 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-24 04:25:44.003758 | orchestrator | 2026-03-24 04:25:44.003853 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-24 04:25:44.003865 | orchestrator | Tuesday 24 March 2026 04:25:24 +0000 (0:00:05.402) 0:06:27.810 ********* 2026-03-24 04:25:44.003875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-24 04:25:44.003942 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:25:44.003966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-24 04:25:44.003974 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:25:44.003982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-24 04:25:44.003988 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:25:44.003995 | orchestrator | 2026-03-24 04:25:44.004001 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-24 04:25:44.004008 | orchestrator | Tuesday 24 March 2026 04:25:26 +0000 (0:00:02.379) 0:06:30.189 ********* 2026-03-24 04:25:44.004015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-24 04:25:44.004025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-24 04:25:44.004033 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:25:44.004040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-03-24 04:25:44.004046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-24 04:25:44.004053 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:25:44.004059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-24 04:25:44.004066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-24 04:25:44.004090 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:25:44.004097 | orchestrator | 2026-03-24 04:25:44.004103 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-24 04:25:44.004110 | orchestrator | Tuesday 24 March 2026 04:25:29 +0000 (0:00:02.402) 0:06:32.592 ********* 2026-03-24 04:25:44.004116 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:25:44.004124 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:25:44.004130 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:25:44.004136 | orchestrator | 2026-03-24 04:25:44.004142 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-24 04:25:44.004148 | orchestrator | Tuesday 24 March 2026 04:25:33 +0000 (0:00:03.858) 0:06:36.450 ********* 2026-03-24 04:25:44.004155 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:25:44.004161 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:25:44.004179 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:25:44.004186 
| orchestrator | 2026-03-24 04:25:44.004193 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-24 04:25:44.004199 | orchestrator | Tuesday 24 March 2026 04:25:37 +0000 (0:00:03.950) 0:06:40.401 ********* 2026-03-24 04:25:44.004206 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-24 04:25:44.004214 | orchestrator | 2026-03-24 04:25:44.004220 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-24 04:25:44.004227 | orchestrator | Tuesday 24 March 2026 04:25:38 +0000 (0:00:01.687) 0:06:42.088 ********* 2026-03-24 04:25:44.004237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-24 04:25:44.004245 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:25:44.004252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-24 04:25:44.004258 | orchestrator | 
skipping: [testbed-node-1] 2026-03-24 04:25:44.004265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-24 04:25:44.004271 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:25:44.004278 | orchestrator | 2026-03-24 04:25:44.004284 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-24 04:25:44.004291 | orchestrator | Tuesday 24 March 2026 04:25:41 +0000 (0:00:02.310) 0:06:44.398 ********* 2026-03-24 04:25:44.004297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-24 04:25:44.004309 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:25:44.004316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-24 04:25:44.004322 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:25:44.004333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-24 04:26:16.908614 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:16.908728 | orchestrator | 2026-03-24 04:26:16.908746 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-24 04:26:16.908854 | orchestrator | Tuesday 24 March 2026 04:25:43 +0000 (0:00:02.931) 0:06:47.329 ********* 2026-03-24 04:26:16.908871 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:16.908883 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:16.908894 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:16.908907 | orchestrator | 2026-03-24 04:26:16.908926 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-24 04:26:16.909058 | orchestrator | Tuesday 24 March 2026 04:25:46 +0000 (0:00:02.598) 0:06:49.928 ********* 2026-03-24 04:26:16.909080 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:26:16.909098 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:26:16.909116 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:26:16.909133 | orchestrator | 2026-03-24 04:26:16.909151 | orchestrator | TASK 
[proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-24 04:26:16.909171 | orchestrator | Tuesday 24 March 2026 04:25:49 +0000 (0:00:03.229) 0:06:53.157 ********* 2026-03-24 04:26:16.909191 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:26:16.909210 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:26:16.909251 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:26:16.909271 | orchestrator | 2026-03-24 04:26:16.909292 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-24 04:26:16.909309 | orchestrator | Tuesday 24 March 2026 04:25:53 +0000 (0:00:03.602) 0:06:56.760 ********* 2026-03-24 04:26:16.909328 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-24 04:26:16.909348 | orchestrator | 2026-03-24 04:26:16.909369 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-24 04:26:16.909388 | orchestrator | Tuesday 24 March 2026 04:25:55 +0000 (0:00:01.994) 0:06:58.755 ********* 2026-03-24 04:26:16.909412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 04:26:16.909467 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:16.909481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 04:26:16.909493 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:16.909504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 04:26:16.909516 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:16.909531 | orchestrator | 2026-03-24 04:26:16.909550 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-24 04:26:16.909563 | orchestrator | Tuesday 24 March 2026 04:25:57 +0000 (0:00:02.368) 0:07:01.124 ********* 2026-03-24 04:26:16.909574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 04:26:16.909585 | orchestrator | 
skipping: [testbed-node-0] 2026-03-24 04:26:16.909638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 04:26:16.909651 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:16.909662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-24 04:26:16.909693 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:16.909704 | orchestrator | 2026-03-24 04:26:16.909715 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-24 04:26:16.909731 | orchestrator | Tuesday 24 March 2026 04:26:00 +0000 (0:00:02.552) 0:07:03.676 ********* 2026-03-24 04:26:16.909743 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:16.909764 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:16.909775 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:16.909785 | orchestrator | 2026-03-24 04:26:16.909797 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-24 04:26:16.909807 | 
orchestrator | Tuesday 24 March 2026 04:26:02 +0000 (0:00:02.421) 0:07:06.098 ********* 2026-03-24 04:26:16.909818 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:26:16.909829 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:26:16.909839 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:26:16.909850 | orchestrator | 2026-03-24 04:26:16.909860 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-24 04:26:16.909871 | orchestrator | Tuesday 24 March 2026 04:26:06 +0000 (0:00:03.535) 0:07:09.633 ********* 2026-03-24 04:26:16.909882 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:26:16.909892 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:26:16.909903 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:26:16.909914 | orchestrator | 2026-03-24 04:26:16.909925 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-24 04:26:16.909935 | orchestrator | Tuesday 24 March 2026 04:26:10 +0000 (0:00:04.268) 0:07:13.902 ********* 2026-03-24 04:26:16.909978 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:26:16.909992 | orchestrator | 2026-03-24 04:26:16.910003 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-24 04:26:16.910014 | orchestrator | Tuesday 24 March 2026 04:26:13 +0000 (0:00:02.488) 0:07:16.391 ********* 2026-03-24 04:26:16.910086 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 04:26:16.910100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 04:26:16.910125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 04:26:18.113278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 04:26:18.113415 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 04:26:18.113431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:26:18.113442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 04:26:18.113453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 04:26:18.113480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 04:26:18.113498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:26:18.113514 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-24 04:26:18.113525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 04:26:18.113535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 04:26:18.113545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 04:26:18.113555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:26:18.113580 | orchestrator | 2026-03-24 04:26:18.113597 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-24 04:26:19.115245 | orchestrator | Tuesday 24 March 2026 04:26:18 +0000 (0:00:05.055) 0:07:21.447 ********* 2026-03-24 04:26:19.115327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 04:26:19.115339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 04:26:19.115346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 04:26:19.115352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 04:26:19.115358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:26:19.115364 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:19.115457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 04:26:19.115470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 04:26:19.115476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 04:26:19.115481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 04:26:19.115485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:26:19.115490 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:19.115495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-24 04:26:19.115510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-24 04:26:36.730125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-24 04:26:36.730247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-24 04:26:36.730265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-24 04:26:36.730277 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:36.730289 | orchestrator | 
2026-03-24 04:26:36.730301 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-24 04:26:36.730312 | orchestrator | Tuesday 24 March 2026 04:26:20 +0000 (0:00:02.148) 0:07:23.595 ********* 2026-03-24 04:26:36.730324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 04:26:36.730336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 04:26:36.730348 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:36.730358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 04:26:36.730389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 04:26:36.730400 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:36.730410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 04:26:36.730420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-24 04:26:36.730430 | orchestrator | skipping: [testbed-node-2] 2026-03-24 
04:26:36.730440 | orchestrator | 2026-03-24 04:26:36.730449 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-24 04:26:36.730459 | orchestrator | Tuesday 24 March 2026 04:26:22 +0000 (0:00:01.776) 0:07:25.371 ********* 2026-03-24 04:26:36.730469 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:26:36.730479 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:26:36.730489 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:26:36.730498 | orchestrator | 2026-03-24 04:26:36.730508 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-24 04:26:36.730517 | orchestrator | Tuesday 24 March 2026 04:26:24 +0000 (0:00:02.253) 0:07:27.625 ********* 2026-03-24 04:26:36.730529 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:26:36.730541 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:26:36.730570 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:26:36.730582 | orchestrator | 2026-03-24 04:26:36.730593 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-24 04:26:36.730605 | orchestrator | Tuesday 24 March 2026 04:26:27 +0000 (0:00:02.940) 0:07:30.565 ********* 2026-03-24 04:26:36.730623 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:26:36.730634 | orchestrator | 2026-03-24 04:26:36.730645 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-24 04:26:36.730656 | orchestrator | Tuesday 24 March 2026 04:26:29 +0000 (0:00:02.574) 0:07:33.139 ********* 2026-03-24 04:26:36.730669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:26:36.730686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:26:36.730706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:26:36.730727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:26:38.838740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:26:38.838819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:26:38.838843 | orchestrator | 2026-03-24 04:26:38.838852 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-24 04:26:38.838859 | orchestrator | Tuesday 24 March 2026 04:26:36 +0000 (0:00:06.920) 0:07:40.060 ********* 2026-03-24 04:26:38.838867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:26:38.838890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:26:38.838898 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:38.838905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:26:38.838916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:26:38.838925 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:38.838935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:26:38.838958 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:26:49.904347 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:49.904474 | orchestrator | 2026-03-24 04:26:49.904501 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-24 04:26:49.904522 | orchestrator | Tuesday 24 March 2026 04:26:38 +0000 (0:00:02.106) 0:07:42.166 ********* 2026-03-24 04:26:49.904543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:26:49.904603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-24 04:26:49.904627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-24 04:26:49.904648 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:49.904666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:26:49.904685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-24 04:26:49.904699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-24 04:26:49.904710 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:49.904721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:26:49.904732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-24 04:26:49.904743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-24 04:26:49.904753 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:49.904764 | orchestrator | 2026-03-24 04:26:49.904775 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-24 04:26:49.904786 | orchestrator | Tuesday 24 March 2026 04:26:40 +0000 (0:00:01.732) 0:07:43.898 ********* 2026-03-24 04:26:49.904797 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:49.904808 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:49.904818 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:49.904829 | orchestrator | 2026-03-24 04:26:49.904841 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-24 04:26:49.904863 | orchestrator | Tuesday 24 March 2026 04:26:42 +0000 (0:00:01.496) 0:07:45.395 ********* 2026-03-24 04:26:49.904911 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:49.904930 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:49.904948 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:26:49.904967 | orchestrator | 2026-03-24 04:26:49.905020 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-24 04:26:49.905039 | orchestrator | Tuesday 24 March 2026 04:26:44 +0000 (0:00:02.732) 0:07:48.127 ********* 2026-03-24 04:26:49.905057 | orchestrator | included: prometheus for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-24 04:26:49.905091 | orchestrator | 2026-03-24 04:26:49.905110 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-24 04:26:49.905129 | orchestrator | Tuesday 24 March 2026 04:26:47 +0000 (0:00:02.406) 0:07:50.533 ********* 2026-03-24 04:26:49.905179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-24 04:26:49.905205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 04:26:49.905227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:49.905248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:49.905268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 04:26:49.905312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-24 04:26:51.847943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 04:26:51.848151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:51.848184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-24 04:26:51.848209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:51.848231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 04:26:51.848302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 04:26:51.848351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:51.848374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:51.848395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 04:26:51.848416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:26:51.848447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-24 04:26:51.848483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:51.848515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.746343 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 04:26:53.746445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:26:53.746461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-24 04:26:53.746487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.746521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.746532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 04:26:53.746560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:26:53.746572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-24 04:26:53.746583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.746606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.746617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 04:26:53.746628 | orchestrator | 2026-03-24 04:26:53.746640 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-24 04:26:53.746650 | orchestrator | Tuesday 24 March 2026 04:26:53 +0000 (0:00:05.977) 0:07:56.511 ********* 2026-03-24 04:26:53.746668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-24 04:26:53.908243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 04:26:53.908347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.908364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.908402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 04:26:53.908432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:26:53.908466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-24 04:26:53.908480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-24 04:26:53.908502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 04:26:53.908519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.908531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.908543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.908554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:53.908574 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 04:26:55.056162 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:26:55.057350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 04:26:55.057430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:26:55.057498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-24 04:26:55.057525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:55.057547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:55.057621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 04:26:55.057664 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:26:55.057696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-24 04:26:55.057727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-24 04:26:55.057746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:55.057758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:26:55.057771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-24 04:26:55.057794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:27:06.904780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-24 04:27:06.904919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:27:06.904968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:27:06.904989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-24 04:27:06.905088 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:06.905108 | orchestrator | 2026-03-24 04:27:06.905125 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-24 04:27:06.905138 | orchestrator | Tuesday 24 March 2026 04:26:55 +0000 (0:00:01.884) 0:07:58.396 ********* 2026-03-24 04:27:06.905149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-24 04:27:06.905163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-24 04:27:06.905177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:27:06.905238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:27:06.905251 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:06.905262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-24 04:27:06.905275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-24 04:27:06.905287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:27:06.905306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:27:06.905318 | orchestrator | skipping: 
[testbed-node-1] 2026-03-24 04:27:06.905330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-24 04:27:06.905342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-24 04:27:06.905354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:27:06.905461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-24 04:27:06.905486 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:06.905506 | orchestrator | 2026-03-24 04:27:06.905525 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-24 04:27:06.905542 | orchestrator | Tuesday 24 March 2026 04:26:56 +0000 (0:00:01.874) 0:08:00.270 ********* 2026-03-24 04:27:06.905558 | 
orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:06.905591 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:06.905609 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:06.905626 | orchestrator | 2026-03-24 04:27:06.905642 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-24 04:27:06.905658 | orchestrator | Tuesday 24 March 2026 04:26:58 +0000 (0:00:01.838) 0:08:02.109 ********* 2026-03-24 04:27:06.905668 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:06.905678 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:06.905688 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:06.905697 | orchestrator | 2026-03-24 04:27:06.905707 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-24 04:27:06.905716 | orchestrator | Tuesday 24 March 2026 04:27:00 +0000 (0:00:02.148) 0:08:04.257 ********* 2026-03-24 04:27:06.905726 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:27:06.905736 | orchestrator | 2026-03-24 04:27:06.905745 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-24 04:27:06.905755 | orchestrator | Tuesday 24 March 2026 04:27:03 +0000 (0:00:02.250) 0:08:06.508 ********* 2026-03-24 04:27:06.905779 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:27:24.363499 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:27:24.363607 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:27:24.363644 | orchestrator | 2026-03-24 04:27:24.363658 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-24 04:27:24.363669 | orchestrator | Tuesday 24 March 2026 04:27:06 +0000 (0:00:03.723) 0:08:10.232 ********* 2026-03-24 04:27:24.363681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:27:24.363692 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:24.363720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:27:24.363733 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:24.363749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-03-24 04:27:24.363761 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:24.363770 | orchestrator | 2026-03-24 04:27:24.363780 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-24 04:27:24.363790 | orchestrator | Tuesday 24 March 2026 04:27:08 +0000 (0:00:01.459) 0:08:11.692 ********* 2026-03-24 04:27:24.363801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-24 04:27:24.363819 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:24.363830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-24 04:27:24.363839 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:24.363849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-24 04:27:24.363858 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:24.363868 | orchestrator | 2026-03-24 04:27:24.363878 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-03-24 04:27:24.363887 | orchestrator | Tuesday 24 March 2026 04:27:09 +0000 (0:00:01.417) 0:08:13.109 ********* 2026-03-24 04:27:24.363897 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:24.363906 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:24.363916 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:24.363925 | orchestrator | 2026-03-24 04:27:24.363935 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-24 04:27:24.363944 | orchestrator | Tuesday 24 March 2026 04:27:11 +0000 (0:00:01.971) 0:08:15.081 ********* 2026-03-24 
04:27:24.363954 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:24.363963 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:24.363973 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:24.363983 | orchestrator | 2026-03-24 04:27:24.363992 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-24 04:27:24.364002 | orchestrator | Tuesday 24 March 2026 04:27:13 +0000 (0:00:02.251) 0:08:17.332 ********* 2026-03-24 04:27:24.364011 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:27:24.364055 | orchestrator | 2026-03-24 04:27:24.364068 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-24 04:27:24.364079 | orchestrator | Tuesday 24 March 2026 04:27:16 +0000 (0:00:02.683) 0:08:20.015 ********* 2026-03-24 04:27:24.364091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-24 
04:27:24.364119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-24 04:27:26.108890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-24 04:27:26.108969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-24 04:27:26.108978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-24 04:27:26.109010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-24 04:27:26.109058 | orchestrator | 2026-03-24 04:27:26.109072 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-24 04:27:26.109082 | orchestrator | Tuesday 24 March 2026 04:27:24 +0000 (0:00:07.683) 0:08:27.699 ********* 2026-03-24 04:27:26.109093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-24 04:27:26.109103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-24 04:27:26.109113 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:26.109122 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-24 04:27:26.109140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-24 04:27:47.425924 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:47.426138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-24 04:27:47.426235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-24 04:27:47.426255 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:47.426264 | orchestrator | 2026-03-24 04:27:47.426275 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-24 04:27:47.426285 | orchestrator | Tuesday 24 March 2026 04:27:26 +0000 (0:00:01.745) 0:08:29.444 ********* 2026-03-24 04:27:47.426296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-24 04:27:47.426308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-24 04:27:47.426336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-24 04:27:47.426365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-24 04:27:47.426374 | orchestrator | skipping: 
[testbed-node-0] 2026-03-24 04:27:47.426384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-24 04:27:47.426393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-24 04:27:47.426419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-24 04:27:47.426429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-24 04:27:47.426438 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:47.426447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-24 04:27:47.426456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-24 04:27:47.426465 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-24 04:27:47.426474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-24 04:27:47.426494 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:47.426503 | orchestrator | 2026-03-24 04:27:47.426512 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-24 04:27:47.426521 | orchestrator | Tuesday 24 March 2026 04:27:28 +0000 (0:00:02.020) 0:08:31.466 ********* 2026-03-24 04:27:47.426530 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:27:47.426538 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:27:47.426547 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:27:47.426556 | orchestrator | 2026-03-24 04:27:47.426564 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-24 04:27:47.426573 | orchestrator | Tuesday 24 March 2026 04:27:30 +0000 (0:00:02.344) 0:08:33.810 ********* 2026-03-24 04:27:47.426582 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:27:47.426590 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:27:47.426599 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:27:47.426614 | orchestrator | 2026-03-24 04:27:47.426623 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-24 04:27:47.426631 | orchestrator | Tuesday 24 March 2026 04:27:33 +0000 (0:00:02.958) 0:08:36.768 ********* 2026-03-24 04:27:47.426640 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:47.426649 | orchestrator 
| skipping: [testbed-node-1] 2026-03-24 04:27:47.426658 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:47.426667 | orchestrator | 2026-03-24 04:27:47.426676 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-24 04:27:47.426685 | orchestrator | Tuesday 24 March 2026 04:27:34 +0000 (0:00:01.337) 0:08:38.105 ********* 2026-03-24 04:27:47.426697 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:47.426713 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:47.426729 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:47.426743 | orchestrator | 2026-03-24 04:27:47.426757 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-24 04:27:47.426772 | orchestrator | Tuesday 24 March 2026 04:27:36 +0000 (0:00:01.476) 0:08:39.582 ********* 2026-03-24 04:27:47.426787 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:47.426801 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:47.426818 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:47.426834 | orchestrator | 2026-03-24 04:27:47.426849 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-24 04:27:47.426862 | orchestrator | Tuesday 24 March 2026 04:27:37 +0000 (0:00:01.713) 0:08:41.295 ********* 2026-03-24 04:27:47.426871 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:47.426880 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:27:47.426888 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:47.426897 | orchestrator | 2026-03-24 04:27:47.426906 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-24 04:27:47.426920 | orchestrator | Tuesday 24 March 2026 04:27:39 +0000 (0:00:01.337) 0:08:42.633 ********* 2026-03-24 04:27:47.426929 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:47.426937 | orchestrator | 
skipping: [testbed-node-1] 2026-03-24 04:27:47.426946 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:27:47.426954 | orchestrator | 2026-03-24 04:27:47.426963 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-03-24 04:27:47.426972 | orchestrator | Tuesday 24 March 2026 04:27:40 +0000 (0:00:01.389) 0:08:44.022 ********* 2026-03-24 04:27:47.426982 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:27:47.426998 | orchestrator | 2026-03-24 04:27:47.427013 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-24 04:27:47.427028 | orchestrator | Tuesday 24 March 2026 04:27:43 +0000 (0:00:02.802) 0:08:46.825 ********* 2026-03-24 04:27:47.427121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-24 04:27:51.632559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-24 04:27:51.632721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-24 04:27:51.632742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:27:51.632754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:27:51.632781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-24 04:27:51.632794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:27:51.632827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:27:51.632840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-24 04:27:51.632860 | orchestrator | 2026-03-24 04:27:51.632873 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-24 04:27:51.632886 | orchestrator | Tuesday 24 March 2026 04:27:47 +0000 (0:00:03.935) 0:08:50.760 ********* 2026-03-24 04:27:51.632898 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:27:51.632910 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:27:51.632922 | orchestrator | } 2026-03-24 04:27:51.632933 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:27:51.632943 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:27:51.632954 | orchestrator | } 2026-03-24 04:27:51.632964 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:27:51.632975 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:27:51.632986 | orchestrator | } 2026-03-24 04:27:51.632997 | orchestrator | 2026-03-24 04:27:51.633008 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:27:51.633019 | orchestrator | Tuesday 24 March 2026 04:27:48 +0000 (0:00:01.509) 0:08:52.269 ********* 2026-03-24 04:27:51.633031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-24 04:27:51.633043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:27:51.633129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:27:51.633145 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:27:51.633159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-24 04:27:51.633189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:29:53.819820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:29:53.819941 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:29:53.819959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-24 04:29:53.819973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-24 04:29:53.820003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-24 04:29:53.820015 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:29:53.820027 | orchestrator | 2026-03-24 04:29:53.820040 | orchestrator | 
RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-24 04:29:53.820052 | orchestrator | Tuesday 24 March 2026 04:27:51 +0000 (0:00:02.691) 0:08:54.961 ********* 2026-03-24 04:29:53.820063 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:29:53.820075 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:29:53.820086 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:29:53.820097 | orchestrator | 2026-03-24 04:29:53.820108 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-24 04:29:53.820119 | orchestrator | Tuesday 24 March 2026 04:27:53 +0000 (0:00:01.812) 0:08:56.774 ********* 2026-03-24 04:29:53.820154 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:29:53.820166 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:29:53.820176 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:29:53.820219 | orchestrator | 2026-03-24 04:29:53.820233 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-24 04:29:53.820244 | orchestrator | Tuesday 24 March 2026 04:27:54 +0000 (0:00:01.367) 0:08:58.142 ********* 2026-03-24 04:29:53.820255 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:29:53.820266 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:29:53.820277 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:29:53.820288 | orchestrator | 2026-03-24 04:29:53.820298 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-24 04:29:53.820309 | orchestrator | Tuesday 24 March 2026 04:28:01 +0000 (0:00:07.075) 0:09:05.217 ********* 2026-03-24 04:29:53.820320 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:29:53.820331 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:29:53.820341 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:29:53.820352 | orchestrator | 2026-03-24 04:29:53.820365 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup 
proxysql container] **************** 2026-03-24 04:29:53.820377 | orchestrator | Tuesday 24 March 2026 04:28:09 +0000 (0:00:07.550) 0:09:12.768 ********* 2026-03-24 04:29:53.820389 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:29:53.820401 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:29:53.820412 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:29:53.820425 | orchestrator | 2026-03-24 04:29:53.820437 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-24 04:29:53.820449 | orchestrator | Tuesday 24 March 2026 04:28:16 +0000 (0:00:07.114) 0:09:19.883 ********* 2026-03-24 04:29:53.820461 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:29:53.820474 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:29:53.820487 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:29:53.820499 | orchestrator | 2026-03-24 04:29:53.820527 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-24 04:29:53.820539 | orchestrator | Tuesday 24 March 2026 04:28:24 +0000 (0:00:07.692) 0:09:27.575 ********* 2026-03-24 04:29:53.820550 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:29:53.820561 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:29:53.820571 | orchestrator | 2026-03-24 04:29:53.820582 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-24 04:29:53.820593 | orchestrator | Tuesday 24 March 2026 04:28:27 +0000 (0:00:03.668) 0:09:31.244 ********* 2026-03-24 04:29:53.820604 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:29:53.820614 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:29:53.820625 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:29:53.820636 | orchestrator | 2026-03-24 04:29:53.820647 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-24 04:29:53.820658 | orchestrator | Tuesday 24 
March 2026 04:28:41 +0000 (0:00:13.590) 0:09:44.834 ********* 2026-03-24 04:29:53.820669 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:29:53.820680 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:29:53.820690 | orchestrator | 2026-03-24 04:29:53.820701 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-24 04:29:53.820711 | orchestrator | Tuesday 24 March 2026 04:28:46 +0000 (0:00:04.841) 0:09:49.676 ********* 2026-03-24 04:29:53.820722 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:29:53.820733 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:29:53.820743 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:29:53.820754 | orchestrator | 2026-03-24 04:29:53.820764 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-24 04:29:53.820775 | orchestrator | Tuesday 24 March 2026 04:28:53 +0000 (0:00:07.289) 0:09:56.966 ********* 2026-03-24 04:29:53.820793 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:29:53.820811 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:29:53.820830 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:29:53.820862 | orchestrator | 2026-03-24 04:29:53.820879 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-24 04:29:53.820897 | orchestrator | Tuesday 24 March 2026 04:29:00 +0000 (0:00:06.825) 0:10:03.792 ********* 2026-03-24 04:29:53.820916 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:29:53.820934 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:29:53.820952 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:29:53.820971 | orchestrator | 2026-03-24 04:29:53.820982 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-24 04:29:53.820993 | orchestrator | Tuesday 24 March 2026 04:29:07 +0000 (0:00:06.830) 0:10:10.622 ********* 2026-03-24 04:29:53.821003 | 
orchestrator | skipping: [testbed-node-1] 2026-03-24 04:29:53.821014 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:29:53.821025 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:29:53.821035 | orchestrator | 2026-03-24 04:29:53.821046 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-24 04:29:53.821057 | orchestrator | Tuesday 24 March 2026 04:29:14 +0000 (0:00:06.805) 0:10:17.428 ********* 2026-03-24 04:29:53.821068 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:29:53.821078 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:29:53.821089 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:29:53.821099 | orchestrator | 2026-03-24 04:29:53.821110 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-03-24 04:29:53.821121 | orchestrator | Tuesday 24 March 2026 04:29:21 +0000 (0:00:07.392) 0:10:24.821 ********* 2026-03-24 04:29:53.821132 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:29:53.821142 | orchestrator | 2026-03-24 04:29:53.821160 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-24 04:29:53.821172 | orchestrator | Tuesday 24 March 2026 04:29:25 +0000 (0:00:03.611) 0:10:28.433 ********* 2026-03-24 04:29:53.821267 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:29:53.821284 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:29:53.821295 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:29:53.821306 | orchestrator | 2026-03-24 04:29:53.821317 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-03-24 04:29:53.821328 | orchestrator | Tuesday 24 March 2026 04:29:37 +0000 (0:00:12.342) 0:10:40.776 ********* 2026-03-24 04:29:53.821338 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:29:53.821349 | orchestrator | 2026-03-24 04:29:53.821360 | orchestrator | RUNNING HANDLER [loadbalancer 
: Start master keepalived container] ************* 2026-03-24 04:29:53.821370 | orchestrator | Tuesday 24 March 2026 04:29:42 +0000 (0:00:04.587) 0:10:45.363 ********* 2026-03-24 04:29:53.821381 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:29:53.821392 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:29:53.821403 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:29:53.821413 | orchestrator | 2026-03-24 04:29:53.821424 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-24 04:29:53.821435 | orchestrator | Tuesday 24 March 2026 04:29:48 +0000 (0:00:06.891) 0:10:52.255 ********* 2026-03-24 04:29:53.821446 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:29:53.821456 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:29:53.821467 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:29:53.821477 | orchestrator | 2026-03-24 04:29:53.821488 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-24 04:29:53.821499 | orchestrator | Tuesday 24 March 2026 04:29:50 +0000 (0:00:02.019) 0:10:54.275 ********* 2026-03-24 04:29:53.821512 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:29:53.821530 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:29:53.821549 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:29:53.821567 | orchestrator | 2026-03-24 04:29:53.821585 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:29:53.821598 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-24 04:29:53.821627 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-24 04:29:53.821649 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-03-24 04:29:54.809076 | orchestrator | 2026-03-24 04:29:54.809172 | 
orchestrator | 2026-03-24 04:29:54.809228 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:29:54.809243 | orchestrator | Tuesday 24 March 2026 04:29:53 +0000 (0:00:02.863) 0:10:57.138 ********* 2026-03-24 04:29:54.809254 | orchestrator | =============================================================================== 2026-03-24 04:29:54.809264 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.59s 2026-03-24 04:29:54.809274 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.34s 2026-03-24 04:29:54.809284 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.69s 2026-03-24 04:29:54.809294 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.68s 2026-03-24 04:29:54.809303 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.55s 2026-03-24 04:29:54.809313 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.39s 2026-03-24 04:29:54.809323 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.29s 2026-03-24 04:29:54.809333 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.11s 2026-03-24 04:29:54.809343 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.08s 2026-03-24 04:29:54.809353 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.93s 2026-03-24 04:29:54.809362 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.92s 2026-03-24 04:29:54.809372 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.89s 2026-03-24 04:29:54.809381 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.83s 
2026-03-24 04:29:54.809391 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.83s 2026-03-24 04:29:54.809401 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.81s 2026-03-24 04:29:54.809411 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.98s 2026-03-24 04:29:54.809420 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.88s 2026-03-24 04:29:54.809430 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.60s 2026-03-24 04:29:54.809440 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.40s 2026-03-24 04:29:54.809449 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.29s 2026-03-24 04:29:55.143059 | orchestrator | + osism apply -a upgrade opensearch 2026-03-24 04:29:57.437306 | orchestrator | 2026-03-24 04:29:57 | INFO  | Task 700e4363-3341-4877-be98-f6ccaa94e894 (opensearch) was prepared for execution. 2026-03-24 04:29:57.437397 | orchestrator | 2026-03-24 04:29:57 | INFO  | It takes a moment until task 700e4363-3341-4877-be98-f6ccaa94e894 (opensearch) has been started and output is visible here. 
2026-03-24 04:30:17.461668 | orchestrator | 2026-03-24 04:30:17.461758 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 04:30:17.461769 | orchestrator | 2026-03-24 04:30:17.461790 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 04:30:17.461798 | orchestrator | Tuesday 24 March 2026 04:30:04 +0000 (0:00:02.329) 0:00:02.329 ********* 2026-03-24 04:30:17.461805 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:30:17.461813 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:30:17.461820 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:30:17.461827 | orchestrator | 2026-03-24 04:30:17.461834 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 04:30:17.461858 | orchestrator | Tuesday 24 March 2026 04:30:06 +0000 (0:00:02.357) 0:00:04.687 ********* 2026-03-24 04:30:17.461876 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-24 04:30:17.461884 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-24 04:30:17.461890 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-24 04:30:17.461897 | orchestrator | 2026-03-24 04:30:17.461904 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-24 04:30:17.461911 | orchestrator | 2026-03-24 04:30:17.461917 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-24 04:30:17.461924 | orchestrator | Tuesday 24 March 2026 04:30:08 +0000 (0:00:02.496) 0:00:07.183 ********* 2026-03-24 04:30:17.461931 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:30:17.461938 | orchestrator | 2026-03-24 04:30:17.461945 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-03-24 04:30:17.461951 | orchestrator | Tuesday 24 March 2026 04:30:11 +0000 (0:00:02.174) 0:00:09.358 ********* 2026-03-24 04:30:17.461958 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-24 04:30:17.461965 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-24 04:30:17.461972 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-24 04:30:17.461978 | orchestrator | 2026-03-24 04:30:17.461985 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-24 04:30:17.461991 | orchestrator | Tuesday 24 March 2026 04:30:13 +0000 (0:00:02.163) 0:00:11.522 ********* 2026-03-24 04:30:17.462000 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:17.462010 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:17.462082 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:17.462098 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:17.462108 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:17.462116 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:17.462128 | orchestrator | 2026-03-24 04:30:17.462136 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-24 04:30:17.462143 | orchestrator | Tuesday 24 March 2026 04:30:15 +0000 (0:00:02.423) 0:00:13.945 ********* 2026-03-24 04:30:17.462149 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:30:17.462156 | orchestrator | 2026-03-24 04:30:17.462167 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-24 04:30:22.933844 | orchestrator | Tuesday 24 March 2026 04:30:17 +0000 
(0:00:01.688) 0:00:15.633 ********* 2026-03-24 04:30:22.933931 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:22.933945 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:22.933953 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:22.933962 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:22.934004 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:22.934059 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:22.934070 | orchestrator | 2026-03-24 04:30:22.934078 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-24 04:30:22.934086 | orchestrator | Tuesday 24 March 2026 04:30:21 +0000 (0:00:03.609) 0:00:19.243 ********* 2026-03-24 04:30:22.934093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:30:22.934118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:30:24.756862 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:30:24.756968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-03-24 04:30:24.756989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:30:24.757002 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:30:24.757015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:30:24.757087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:30:24.757109 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:30:24.757138 | orchestrator | 2026-03-24 04:30:24.757160 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-24 04:30:24.757179 | orchestrator | Tuesday 24 March 2026 04:30:22 +0000 (0:00:01.868) 0:00:21.111 ********* 2026-03-24 04:30:24.757198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:30:24.757252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
2026-03-24 04:30:24.757287 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:30:24.757304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:30:24.757335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:30:28.486862 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:30:28.486978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:30:28.487000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:30:28.487036 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:30:28.487049 | orchestrator | 2026-03-24 04:30:28.487062 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-24 04:30:28.487074 | orchestrator | Tuesday 24 March 2026 04:30:24 +0000 (0:00:01.821) 0:00:22.933 ********* 2026-03-24 04:30:28.487100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:28.487142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:28.487163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:28.487183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:28.487221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:28.487318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:42.395190 | orchestrator | 2026-03-24 04:30:42.395403 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-24 04:30:42.395425 | orchestrator | Tuesday 24 March 2026 04:30:28 +0000 (0:00:03.725) 0:00:26.659 ********* 2026-03-24 04:30:42.395438 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:30:42.395450 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:30:42.395461 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:30:42.395472 | orchestrator | 2026-03-24 04:30:42.395484 | orchestrator | TASK [opensearch : 
Copying over opensearch-dashboards config file] ************* 2026-03-24 04:30:42.395495 | orchestrator | Tuesday 24 March 2026 04:30:32 +0000 (0:00:03.828) 0:00:30.487 ********* 2026-03-24 04:30:42.395506 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:30:42.395516 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:30:42.395527 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:30:42.395538 | orchestrator | 2026-03-24 04:30:42.395549 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-03-24 04:30:42.395560 | orchestrator | Tuesday 24 March 2026 04:30:35 +0000 (0:00:03.038) 0:00:33.525 ********* 2026-03-24 04:30:42.395597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:42.395612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:42.395639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-24 04:30:42.395673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:42.395696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:42.395715 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-24 04:30:42.395729 | orchestrator | 2026-03-24 04:30:42.395742 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-03-24 04:30:42.395755 | orchestrator | Tuesday 24 March 2026 04:30:38 +0000 (0:00:03.656) 0:00:37.182 ********* 2026-03-24 04:30:42.395768 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:30:42.395781 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:30:42.395794 | orchestrator | } 2026-03-24 04:30:42.395806 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:30:42.395819 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:30:42.395830 | orchestrator | } 2026-03-24 04:30:42.395842 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:30:42.395854 | orchestrator 
|  "msg": "Notifying handlers" 2026-03-24 04:30:42.395866 | orchestrator | } 2026-03-24 04:30:42.395878 | orchestrator | 2026-03-24 04:30:42.395890 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:30:42.395902 | orchestrator | Tuesday 24 March 2026 04:30:40 +0000 (0:00:01.367) 0:00:38.549 ********* 2026-03-24 04:30:42.395923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:33:55.879255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:33:55.879349 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:33:55.879359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:33:55.879378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:33:55.879384 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:33:55.879401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-24 04:33:55.879429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-24 04:33:55.879439 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:33:55.879446 | orchestrator | 2026-03-24 04:33:55.879502 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-24 04:33:55.879512 | orchestrator | Tuesday 24 March 2026 04:30:42 +0000 (0:00:02.021) 0:00:40.571 ********* 2026-03-24 04:33:55.879518 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:33:55.879525 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:33:55.879532 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:33:55.879539 | orchestrator | 2026-03-24 04:33:55.879546 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-24 04:33:55.879552 | orchestrator | Tuesday 24 March 2026 04:30:43 +0000 (0:00:01.576) 0:00:42.148 ********* 2026-03-24 04:33:55.879559 | orchestrator | 
2026-03-24 04:33:55.879566 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-24 04:33:55.879573 | orchestrator | Tuesday 24 March 2026 04:30:44 +0000 (0:00:00.448) 0:00:42.596 ********* 2026-03-24 04:33:55.879579 | orchestrator | 2026-03-24 04:33:55.879586 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-24 04:33:55.879592 | orchestrator | Tuesday 24 March 2026 04:30:44 +0000 (0:00:00.448) 0:00:43.045 ********* 2026-03-24 04:33:55.879598 | orchestrator | 2026-03-24 04:33:55.879605 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-24 04:33:55.879612 | orchestrator | Tuesday 24 March 2026 04:30:45 +0000 (0:00:00.791) 0:00:43.836 ********* 2026-03-24 04:33:55.879619 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:33:55.879627 | orchestrator | 2026-03-24 04:33:55.879633 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-24 04:33:55.879646 | orchestrator | Tuesday 24 March 2026 04:30:49 +0000 (0:00:03.529) 0:00:47.365 ********* 2026-03-24 04:33:55.879653 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:33:55.879660 | orchestrator | 2026-03-24 04:33:55.879667 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-24 04:33:55.879674 | orchestrator | Tuesday 24 March 2026 04:30:58 +0000 (0:00:09.161) 0:00:56.527 ********* 2026-03-24 04:33:55.879681 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:33:55.879701 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:33:55.879709 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:33:55.879716 | orchestrator | 2026-03-24 04:33:55.879723 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-24 04:33:55.879730 | orchestrator | Tuesday 24 March 2026 04:32:12 +0000 (0:01:14.399) 
0:02:10.927 ********* 2026-03-24 04:33:55.879737 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:33:55.879744 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:33:55.879751 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:33:55.879759 | orchestrator | 2026-03-24 04:33:55.879766 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-24 04:33:55.879773 | orchestrator | Tuesday 24 March 2026 04:33:46 +0000 (0:01:33.415) 0:03:44.342 ********* 2026-03-24 04:33:55.879781 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:33:55.879789 | orchestrator | 2026-03-24 04:33:55.879796 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-24 04:33:55.879803 | orchestrator | Tuesday 24 March 2026 04:33:47 +0000 (0:00:01.716) 0:03:46.058 ********* 2026-03-24 04:33:55.879812 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:33:55.879820 | orchestrator | 2026-03-24 04:33:55.879828 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-24 04:33:55.879835 | orchestrator | Tuesday 24 March 2026 04:33:51 +0000 (0:00:03.351) 0:03:49.409 ********* 2026-03-24 04:33:55.879843 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:33:55.879851 | orchestrator | 2026-03-24 04:33:55.879856 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-24 04:33:55.879861 | orchestrator | Tuesday 24 March 2026 04:33:54 +0000 (0:00:03.395) 0:03:52.804 ********* 2026-03-24 04:33:55.879866 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:33:55.879871 | orchestrator | 2026-03-24 04:33:55.879877 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-24 04:33:55.879891 | orchestrator | Tuesday 24 March 2026 04:33:55 +0000 (0:00:01.246) 
0:03:54.051 ********* 2026-03-24 04:33:58.191637 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:33:58.191755 | orchestrator | 2026-03-24 04:33:58.191780 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:33:58.191801 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 04:33:58.191819 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 04:33:58.191836 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 04:33:58.191857 | orchestrator | 2026-03-24 04:33:58.191878 | orchestrator | 2026-03-24 04:33:58.191893 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:33:58.191909 | orchestrator | Tuesday 24 March 2026 04:33:57 +0000 (0:00:01.913) 0:03:55.964 ********* 2026-03-24 04:33:58.191952 | orchestrator | =============================================================================== 2026-03-24 04:33:58.192030 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 93.41s 2026-03-24 04:33:58.192048 | orchestrator | opensearch : Restart opensearch container ------------------------------ 74.40s 2026-03-24 04:33:58.192065 | orchestrator | opensearch : Perform a flush -------------------------------------------- 9.16s 2026-03-24 04:33:58.192083 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.83s 2026-03-24 04:33:58.192102 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.73s 2026-03-24 04:33:58.192119 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.66s 2026-03-24 04:33:58.192136 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.61s 2026-03-24 
04:33:58.192176 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.53s 2026-03-24 04:33:58.192187 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.40s 2026-03-24 04:33:58.192198 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.35s 2026-03-24 04:33:58.192209 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.04s 2026-03-24 04:33:58.192220 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.50s 2026-03-24 04:33:58.192232 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.42s 2026-03-24 04:33:58.192243 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.36s 2026-03-24 04:33:58.192254 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.18s 2026-03-24 04:33:58.192265 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.16s 2026-03-24 04:33:58.192276 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.02s 2026-03-24 04:33:58.192287 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.91s 2026-03-24 04:33:58.192298 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.87s 2026-03-24 04:33:58.192322 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.82s 2026-03-24 04:33:58.527068 | orchestrator | + osism apply -a upgrade memcached 2026-03-24 04:34:00.689790 | orchestrator | 2026-03-24 04:34:00 | INFO  | Task 9f7d9c50-6ac5-4b90-b557-746cabe421f4 (memcached) was prepared for execution. 
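The opensearch handlers above follow the usual rolling-restart preamble: disable shard allocation, flush, then restart the containers. A minimal sketch of the two REST calls that preamble corresponds to, assuming the standard OpenSearch cluster-settings and flush endpoints (the host and port are taken from the healthcheck URLs in this log; the payload shape is an assumption based on the conventional API, not copied from the role itself):

```python
# Sketch of the rolling-upgrade preamble performed by the handlers above.
# Builds the (method, url, body) triples without doing network I/O, so the
# same logic can be reused with any HTTP client.

def disable_shard_allocation_request(host: str = "192.168.16.10", port: int = 9200):
    """Request that stops replica shard allocation before a node restart."""
    url = f"http://{host}:{port}/_cluster/settings"
    # "primaries" keeps primary shards assignable while replicas stay put
    body = {"transient": {"cluster.routing.allocation.enable": "primaries"}}
    return ("PUT", url, body)

def flush_request(host: str = "192.168.16.10", port: int = 9200):
    """Request that flushes in-memory segments to disk prior to restart."""
    return ("POST", f"http://{host}:{port}/_flush", None)

method, url, body = disable_shard_allocation_request()
```

After the restart completes, the inverse call (setting `cluster.routing.allocation.enable` back to `"all"`) would re-enable replica allocation; the log does not show that step explicitly, so it is omitted here.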
2026-03-24 04:34:00.689868 | orchestrator | 2026-03-24 04:34:00 | INFO  | It takes a moment until task 9f7d9c50-6ac5-4b90-b557-746cabe421f4 (memcached) has been started and output is visible here. 2026-03-24 04:34:33.603827 | orchestrator | 2026-03-24 04:34:33.603949 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 04:34:33.603967 | orchestrator | 2026-03-24 04:34:33.603980 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 04:34:33.603991 | orchestrator | Tuesday 24 March 2026 04:34:06 +0000 (0:00:01.437) 0:00:01.437 ********* 2026-03-24 04:34:33.604008 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:34:33.604028 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:34:33.604047 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:34:33.604066 | orchestrator | 2026-03-24 04:34:33.604086 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 04:34:33.604104 | orchestrator | Tuesday 24 March 2026 04:34:07 +0000 (0:00:01.702) 0:00:03.140 ********* 2026-03-24 04:34:33.604117 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-24 04:34:33.604128 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-24 04:34:33.604140 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-24 04:34:33.604151 | orchestrator | 2026-03-24 04:34:33.604162 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-24 04:34:33.604173 | orchestrator | 2026-03-24 04:34:33.604184 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-24 04:34:33.604195 | orchestrator | Tuesday 24 March 2026 04:34:10 +0000 (0:00:02.237) 0:00:05.377 ********* 2026-03-24 04:34:33.604207 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-24 04:34:33.604218 | orchestrator | 2026-03-24 04:34:33.604229 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-24 04:34:33.604241 | orchestrator | Tuesday 24 March 2026 04:34:12 +0000 (0:00:02.350) 0:00:07.727 ********* 2026-03-24 04:34:33.604252 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-24 04:34:33.604264 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-24 04:34:33.604275 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-24 04:34:33.604286 | orchestrator | 2026-03-24 04:34:33.604297 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-24 04:34:33.604336 | orchestrator | Tuesday 24 March 2026 04:34:14 +0000 (0:00:01.901) 0:00:09.629 ********* 2026-03-24 04:34:33.604347 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-24 04:34:33.604358 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-24 04:34:33.604369 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-24 04:34:33.604379 | orchestrator | 2026-03-24 04:34:33.604390 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-03-24 04:34:33.604401 | orchestrator | Tuesday 24 March 2026 04:34:17 +0000 (0:00:02.688) 0:00:12.318 ********* 2026-03-24 04:34:33.604416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-24 04:34:33.604431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-24 04:34:33.604477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-24 04:34:33.604491 | orchestrator | 2026-03-24 04:34:33.604537 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 
2026-03-24 04:34:33.604550 | orchestrator | Tuesday 24 March 2026 04:34:19 +0000 (0:00:02.230) 0:00:14.548 ********* 2026-03-24 04:34:33.604561 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:34:33.604572 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:34:33.604583 | orchestrator | } 2026-03-24 04:34:33.604594 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:34:33.604605 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:34:33.604616 | orchestrator | } 2026-03-24 04:34:33.604626 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:34:33.604637 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:34:33.604648 | orchestrator | } 2026-03-24 04:34:33.604659 | orchestrator | 2026-03-24 04:34:33.604670 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:34:33.604680 | orchestrator | Tuesday 24 March 2026 04:34:20 +0000 (0:00:01.352) 0:00:15.901 ********* 2026-03-24 04:34:33.604701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-24 04:34:33.604713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-24 04:34:33.604725 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:34:33.604736 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:34:33.604748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-24 04:34:33.604759 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:34:33.604770 | orchestrator | 2026-03-24 04:34:33.604781 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-24 04:34:33.604791 | orchestrator | Tuesday 24 March 2026 04:34:22 +0000 (0:00:01.991) 0:00:17.892 ********* 2026-03-24 04:34:33.604802 | 
orchestrator | changed: [testbed-node-1] 2026-03-24 04:34:33.604813 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:34:33.604824 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:34:33.604835 | orchestrator | 2026-03-24 04:34:33.604845 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:34:33.604857 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 04:34:33.604876 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 04:34:33.604887 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 04:34:33.604898 | orchestrator | 2026-03-24 04:34:33.604909 | orchestrator | 2026-03-24 04:34:33.604920 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:34:33.604938 | orchestrator | Tuesday 24 March 2026 04:34:33 +0000 (0:00:10.848) 0:00:28.741 ********* 2026-03-24 04:34:33.937494 | orchestrator | =============================================================================== 2026-03-24 04:34:33.937636 | orchestrator | memcached : Restart memcached container -------------------------------- 10.85s 2026-03-24 04:34:33.937654 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.69s 2026-03-24 04:34:33.937668 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.35s 2026-03-24 04:34:33.937682 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.24s 2026-03-24 04:34:33.937695 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.23s 2026-03-24 04:34:33.937709 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.99s 2026-03-24 04:34:33.937722 | orchestrator | memcached : 
Ensuring config directories exist --------------------------- 1.90s 2026-03-24 04:34:33.937736 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.70s 2026-03-24 04:34:33.937750 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.35s 2026-03-24 04:34:34.245188 | orchestrator | + osism apply -a upgrade redis 2026-03-24 04:34:36.361773 | orchestrator | 2026-03-24 04:34:36 | INFO  | Task 2bbd14aa-d14c-4367-972b-b4d7d1dd0ac1 (redis) was prepared for execution. 2026-03-24 04:34:36.361863 | orchestrator | 2026-03-24 04:34:36 | INFO  | It takes a moment until task 2bbd14aa-d14c-4367-972b-b4d7d1dd0ac1 (redis) has been started and output is visible here. 2026-03-24 04:34:47.452120 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-24 04:34:47.452268 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-24 04:34:47.452318 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-24 04:34:47.452338 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-24 04:34:47.452364 | orchestrator | 2026-03-24 04:34:47.452377 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 04:34:47.452387 | orchestrator | 2026-03-24 04:34:47.452399 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 04:34:47.452410 | orchestrator | Tuesday 24 March 2026 04:34:41 +0000 (0:00:00.971) 0:00:00.971 ********* 2026-03-24 04:34:47.452421 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:34:47.452433 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:34:47.452444 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:34:47.452455 | orchestrator | 2026-03-24 04:34:47.452466 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 
04:34:47.452477 | orchestrator | Tuesday 24 March 2026 04:34:42 +0000 (0:00:00.759) 0:00:01.730 ********* 2026-03-24 04:34:47.452488 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-24 04:34:47.452499 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-24 04:34:47.452511 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-24 04:34:47.452598 | orchestrator | 2026-03-24 04:34:47.452609 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-24 04:34:47.452620 | orchestrator | 2026-03-24 04:34:47.452631 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-24 04:34:47.452642 | orchestrator | Tuesday 24 March 2026 04:34:43 +0000 (0:00:00.827) 0:00:02.558 ********* 2026-03-24 04:34:47.452659 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:34:47.452687 | orchestrator | 2026-03-24 04:34:47.452712 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-24 04:34:47.452732 | orchestrator | Tuesday 24 March 2026 04:34:43 +0000 (0:00:00.915) 0:00:03.473 ********* 2026-03-24 04:34:47.452755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.452837 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.452864 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.452887 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.452941 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.452970 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.452990 | orchestrator | 2026-03-24 04:34:47.453010 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-24 04:34:47.453045 | orchestrator | Tuesday 24 March 2026 04:34:45 +0000 (0:00:01.293) 0:00:04.767 ********* 2026-03-24 04:34:47.453066 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.453151 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.453199 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.453220 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:47.453255 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370186 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370320 | orchestrator | 2026-03-24 04:34:52.370338 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-24 04:34:52.370351 | orchestrator | Tuesday 24 March 2026 04:34:47 +0000 (0:00:02.211) 0:00:06.978 ********* 2026-03-24 04:34:52.370363 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370391 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370423 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370435 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370447 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370490 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 
2026-03-24 04:34:52.370573 | orchestrator | 2026-03-24 04:34:52.370587 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-03-24 04:34:52.370599 | orchestrator | Tuesday 24 March 2026 04:34:50 +0000 (0:00:02.829) 0:00:09.807 ********* 2026-03-24 04:34:52.370611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:34:52.370769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-24 04:35:15.604725 | orchestrator | 2026-03-24 04:35:15.604839 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-03-24 04:35:15.604857 | orchestrator | Tuesday 24 March 2026 04:34:52 +0000 (0:00:02.091) 0:00:11.899 ********* 2026-03-24 04:35:15.604871 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:35:15.604884 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:35:15.604895 | orchestrator | } 2026-03-24 04:35:15.604906 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:35:15.604917 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:35:15.604928 | orchestrator | } 2026-03-24 04:35:15.604939 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:35:15.604950 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:35:15.604979 | orchestrator | } 2026-03-24 04:35:15.605001 | orchestrator | 2026-03-24 04:35:15.605013 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:35:15.605024 | orchestrator | Tuesday 24 March 2026 04:34:52 +0000 (0:00:00.581) 0:00:12.480 ********* 2026-03-24 04:35:15.605037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-24 04:35:15.605069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-24 04:35:15.605082 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-24 04:35:15.605093 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-24 04:35:15.605116 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:35:15.605128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-24 04:35:15.605141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-24 04:35:15.605175 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:35:15.605204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-24 04:35:15.605217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-24 04:35:15.605229 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:35:15.605240 | orchestrator | 2026-03-24 04:35:15.605251 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-24 04:35:15.605264 | orchestrator | Tuesday 24 March 2026 04:34:53 +0000 (0:00:01.038) 0:00:13.519 ********* 2026-03-24 04:35:15.605277 | orchestrator | 2026-03-24 04:35:15.605288 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-24 04:35:15.605300 | orchestrator | Tuesday 24 March 2026 04:34:54 +0000 (0:00:00.080) 0:00:13.599 ********* 2026-03-24 04:35:15.605312 | orchestrator | 2026-03-24 04:35:15.605324 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-24 04:35:15.605339 | orchestrator | Tuesday 24 March 2026 04:34:54 +0000 (0:00:00.072) 0:00:13.672 ********* 2026-03-24 04:35:15.605357 | orchestrator | 2026-03-24 04:35:15.605375 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-24 04:35:15.605428 | orchestrator | Tuesday 24 March 2026 04:34:54 +0000 (0:00:00.073) 0:00:13.745 ********* 2026-03-24 04:35:15.605448 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:35:15.605467 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:35:15.605490 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:35:15.605519 | orchestrator | 2026-03-24 04:35:15.605537 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-24 04:35:15.605554 | orchestrator | Tuesday 24 March 2026 04:35:04 +0000 (0:00:09.962) 0:00:23.708 ********* 2026-03-24 04:35:15.605572 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:35:15.605590 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:35:15.605607 | orchestrator | changed: [testbed-node-1] 2026-03-24 
04:35:15.605626 | orchestrator | 2026-03-24 04:35:15.605646 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:35:15.605665 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 04:35:15.605683 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 04:35:15.605708 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 04:35:15.605719 | orchestrator | 2026-03-24 04:35:15.605730 | orchestrator | 2026-03-24 04:35:15.605741 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:35:15.605752 | orchestrator | Tuesday 24 March 2026 04:35:15 +0000 (0:00:10.921) 0:00:34.629 ********* 2026-03-24 04:35:15.605762 | orchestrator | =============================================================================== 2026-03-24 04:35:15.605773 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.92s 2026-03-24 04:35:15.605783 | orchestrator | redis : Restart redis container ----------------------------------------- 9.96s 2026-03-24 04:35:15.605794 | orchestrator | redis : Copying over redis config files --------------------------------- 2.83s 2026-03-24 04:35:15.605805 | orchestrator | redis : Copying over default config.json files -------------------------- 2.21s 2026-03-24 04:35:15.605815 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.09s 2026-03-24 04:35:15.605826 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.29s 2026-03-24 04:35:15.605836 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.04s 2026-03-24 04:35:15.605847 | orchestrator | redis : include_tasks --------------------------------------------------- 
0.92s 2026-03-24 04:35:15.605858 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s 2026-03-24 04:35:15.605868 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.76s 2026-03-24 04:35:15.605879 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.58s 2026-03-24 04:35:15.605890 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s 2026-03-24 04:35:15.959456 | orchestrator | + osism apply -a upgrade mariadb 2026-03-24 04:35:17.968307 | orchestrator | 2026-03-24 04:35:17 | INFO  | Task 7abf9bb0-8abc-4c94-87c2-27ca560479ff (mariadb) was prepared for execution. 2026-03-24 04:35:17.968484 | orchestrator | 2026-03-24 04:35:17 | INFO  | It takes a moment until task 7abf9bb0-8abc-4c94-87c2-27ca560479ff (mariadb) has been started and output is visible here. 2026-03-24 04:35:42.662193 | orchestrator | 2026-03-24 04:35:42.662351 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 04:35:42.662370 | orchestrator | 2026-03-24 04:35:42.662382 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 04:35:42.662394 | orchestrator | Tuesday 24 March 2026 04:35:23 +0000 (0:00:01.471) 0:00:01.471 ********* 2026-03-24 04:35:42.662405 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:35:42.662417 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:35:42.662428 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:35:42.662438 | orchestrator | 2026-03-24 04:35:42.662450 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 04:35:42.662460 | orchestrator | Tuesday 24 March 2026 04:35:25 +0000 (0:00:01.883) 0:00:03.355 ********* 2026-03-24 04:35:42.662472 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-24 04:35:42.662483 | 
orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-24 04:35:42.662494 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-24 04:35:42.662505 | orchestrator | 2026-03-24 04:35:42.662516 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-24 04:35:42.662527 | orchestrator | 2026-03-24 04:35:42.662538 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-24 04:35:42.662549 | orchestrator | Tuesday 24 March 2026 04:35:27 +0000 (0:00:01.708) 0:00:05.063 ********* 2026-03-24 04:35:42.662560 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:35:42.662571 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-24 04:35:42.662582 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-24 04:35:42.662593 | orchestrator | 2026-03-24 04:35:42.662629 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-24 04:35:42.662641 | orchestrator | Tuesday 24 March 2026 04:35:28 +0000 (0:00:01.455) 0:00:06.519 ********* 2026-03-24 04:35:42.662652 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:35:42.662664 | orchestrator | 2026-03-24 04:35:42.662676 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-24 04:35:42.662697 | orchestrator | Tuesday 24 March 2026 04:35:30 +0000 (0:00:01.933) 0:00:08.452 ********* 2026-03-24 04:35:42.662714 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 04:35:42.662754 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 04:35:42.662784 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-24 04:35:42.662798 | orchestrator | 2026-03-24 04:35:42.662811 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-24 04:35:42.662824 | orchestrator | Tuesday 24 March 2026 04:35:34 +0000 (0:00:04.050) 0:00:12.503 ********* 2026-03-24 04:35:42.662836 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:35:42.662850 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:35:42.662862 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:35:42.662874 | orchestrator | 2026-03-24 04:35:42.662887 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-24 04:35:42.662899 | orchestrator | Tuesday 24 March 2026 04:35:36 +0000 (0:00:01.577) 0:00:14.080 
*********
2026-03-24 04:35:42.662912 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:35:42.662924 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:35:42.662936 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:35:42.662948 | orchestrator |
2026-03-24 04:35:42.662960 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-24 04:35:42.662973 | orchestrator | Tuesday 24 March 2026 04:35:38 +0000 (0:00:02.159) 0:00:16.240 *********
2026-03-24 04:35:42.662993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:35:53.930887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:35:53.931011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:35:53.931056 | orchestrator |
2026-03-24 04:35:53.931071 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-24 04:35:53.931085 | orchestrator | Tuesday 24 March 2026 04:35:42 +0000 (0:00:04.114) 0:00:20.354 *********
2026-03-24 04:35:53.931096 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:35:53.931109 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:35:53.931120 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:35:53.931131 | orchestrator |
2026-03-24 04:35:53.931143 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-24 04:35:53.931172 | orchestrator | Tuesday 24 March 2026 04:35:44 +0000 (0:00:02.010) 0:00:22.364 *********
2026-03-24 04:35:53.931183 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:35:53.931194 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:35:53.931206 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:35:53.931327 | orchestrator |
2026-03-24 04:35:53.931361 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-24 04:35:53.931381 | orchestrator | Tuesday 24 March 2026 04:35:49 +0000 (0:00:04.503) 0:00:26.868 *********
2026-03-24 04:35:53.931402 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-24 04:35:53.931422 | orchestrator |
2026-03-24 04:35:53.931442 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-24 04:35:53.931460 | orchestrator | Tuesday 24 March 2026 04:35:50 +0000 (0:00:01.618) 0:00:28.487 *********
2026-03-24 04:35:53.931483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:35:53.931506 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:35:53.931569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:00.852488 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:36:00.852603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:00.852621 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:36:00.852632 | orchestrator |
2026-03-24 04:36:00.852644 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-24 04:36:00.852655 | orchestrator | Tuesday 24 March 2026 04:35:53 +0000 (0:00:03.132) 0:00:31.620 *********
2026-03-24 04:36:00.852695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:00.852707 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:36:00.852750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:00.852763 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:36:00.852773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:00.852793 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:36:00.852803 | orchestrator |
2026-03-24 04:36:00.852813 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-24 04:36:00.852823 | orchestrator | Tuesday 24 March 2026 04:35:57 +0000 (0:00:03.168) 0:00:34.788 *********
2026-03-24 04:36:00.852847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:04.812540 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:36:04.812659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:04.812707 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:36:04.812736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:04.812750 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:36:04.812761 | orchestrator |
2026-03-24 04:36:04.812774 | orchestrator | TASK [service-check-containers : mariadb | Check containers] *******************
2026-03-24 04:36:04.812787 | orchestrator | Tuesday 24 March 2026 04:36:00 +0000 (0:00:03.760) 0:00:38.548 *********
2026-03-24 04:36:04.812819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:04.812847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:04.812871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:19.937546 | orchestrator |
2026-03-24 04:36:19.937686 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] ***
2026-03-24 04:36:19.937705 | orchestrator | Tuesday 24 March 2026 04:36:04 +0000 (0:00:03.960) 0:00:42.509 *********
2026-03-24 04:36:19.937719 | orchestrator | changed: [testbed-node-0] => {
2026-03-24 04:36:19.937731 | orchestrator |  "msg": "Notifying handlers"
2026-03-24 04:36:19.937743 | orchestrator | }
2026-03-24 04:36:19.937755 | orchestrator | changed: [testbed-node-1] => {
2026-03-24 04:36:19.937766 | orchestrator |  "msg": "Notifying handlers"
2026-03-24 04:36:19.937777 | orchestrator | }
2026-03-24 04:36:19.937788 | orchestrator | changed: [testbed-node-2] => {
2026-03-24 04:36:19.937799 | orchestrator |  "msg": "Notifying handlers"
2026-03-24 04:36:19.937810 | orchestrator | }
2026-03-24 04:36:19.937821 | orchestrator |
2026-03-24 04:36:19.937832 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-24 04:36:19.937843 | orchestrator | Tuesday 24 March 2026 04:36:06 +0000 (0:00:01.399) 0:00:43.909 *********
2026-03-24 04:36:19.937875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:19.937915 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:36:19.937950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:19.937963 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:36:19.937981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-24 04:36:19.938001 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:36:19.938012 | orchestrator |
2026-03-24 04:36:19.938153 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-03-24 04:36:19.938179 | orchestrator | Tuesday 24 March 2026 04:36:10 +0000 (0:00:03.928) 0:00:47.837 *********
2026-03-24 04:36:19.938199 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:36:19.938219 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:36:19.938236 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:36:19.938250 | orchestrator |
2026-03-24 04:36:19.938263 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-03-24 04:36:19.938275 | orchestrator | Tuesday 24 March 2026 04:36:11 +0000 (0:00:01.105) 0:00:49.228 *********
2026-03-24 04:36:19.938288 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:36:19.938300 | orchestrator |
2026-03-24 04:36:19.938312 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-03-24 04:36:19.938324 | orchestrator | Tuesday 24 March 2026 04:36:12 +0000 (0:00:01.439) 0:00:50.333 *********
2026-03-24 04:36:19.938336 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:36:19.938348 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:36:19.938360 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:36:19.938373 | orchestrator |
2026-03-24 04:36:19.938385 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-03-24 04:36:19.938397 | orchestrator | Tuesday 24 March 2026 04:36:14 +0000 (0:00:01.687) 0:00:51.773 *********
2026-03-24 04:36:19.938409 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:36:19.938421 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:36:19.938433 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:36:19.938445 | orchestrator |
2026-03-24 04:36:19.938459 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-03-24 04:36:19.938471 | orchestrator | Tuesday 24 March 2026 04:36:15 +0000 (0:00:01.373) 0:00:53.460 *********
2026-03-24 04:36:19.938482 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:36:19.938492 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:36:19.938503 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:36:19.938514 | orchestrator |
2026-03-24 04:36:19.938524 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-03-24 04:36:19.938535 | orchestrator | Tuesday 24 March 2026 04:36:17 +0000 (0:00:01.373) 0:00:54.833 *********
2026-03-24 04:36:19.938546 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:36:19.938557 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:36:19.938567 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:36:19.938578 | orchestrator |
2026-03-24 04:36:19.938588 | orchestrator | TASK [mariadb : Removing MariaDB log file
from /tmp] *************************** 2026-03-24 04:36:19.938599 | orchestrator | Tuesday 24 March 2026 04:36:18 +0000 (0:00:01.357) 0:00:56.191 ********* 2026-03-24 04:36:19.938610 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:19.938621 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:19.938631 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:19.938642 | orchestrator | 2026-03-24 04:36:19.938663 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-03-24 04:36:37.715317 | orchestrator | Tuesday 24 March 2026 04:36:19 +0000 (0:00:01.434) 0:00:57.626 ********* 2026-03-24 04:36:37.715441 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.715460 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.715472 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.715484 | orchestrator | 2026-03-24 04:36:37.715497 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-03-24 04:36:37.715509 | orchestrator | Tuesday 24 March 2026 04:36:21 +0000 (0:00:01.634) 0:00:59.261 ********* 2026-03-24 04:36:37.715520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 04:36:37.715532 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 04:36:37.715544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 04:36:37.715579 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.715591 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-24 04:36:37.715603 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-24 04:36:37.715614 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-24 04:36:37.715626 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.715637 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-24 
04:36:37.715649 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-24 04:36:37.715661 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-24 04:36:37.715672 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.715684 | orchestrator | 2026-03-24 04:36:37.715696 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-03-24 04:36:37.715724 | orchestrator | Tuesday 24 March 2026 04:36:23 +0000 (0:00:01.462) 0:01:00.724 ********* 2026-03-24 04:36:37.715736 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.715748 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.715759 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.715770 | orchestrator | 2026-03-24 04:36:37.715782 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-03-24 04:36:37.715793 | orchestrator | Tuesday 24 March 2026 04:36:24 +0000 (0:00:01.363) 0:01:02.088 ********* 2026-03-24 04:36:37.715805 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.715818 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.715829 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.715840 | orchestrator | 2026-03-24 04:36:37.715853 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-03-24 04:36:37.715867 | orchestrator | Tuesday 24 March 2026 04:36:25 +0000 (0:00:01.416) 0:01:03.504 ********* 2026-03-24 04:36:37.715879 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.715891 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.715906 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.715920 | orchestrator | 2026-03-24 04:36:37.715935 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-03-24 04:36:37.715950 | orchestrator | Tuesday 24 March 2026 04:36:27 
+0000 (0:00:01.341) 0:01:04.846 ********* 2026-03-24 04:36:37.715964 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.715978 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.715993 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.716007 | orchestrator | 2026-03-24 04:36:37.716022 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-03-24 04:36:37.716105 | orchestrator | Tuesday 24 March 2026 04:36:28 +0000 (0:00:01.374) 0:01:06.220 ********* 2026-03-24 04:36:37.716120 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.716133 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.716146 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.716160 | orchestrator | 2026-03-24 04:36:37.716173 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-03-24 04:36:37.716186 | orchestrator | Tuesday 24 March 2026 04:36:29 +0000 (0:00:01.334) 0:01:07.555 ********* 2026-03-24 04:36:37.716199 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.716212 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.716225 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.716238 | orchestrator | 2026-03-24 04:36:37.716252 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-03-24 04:36:37.716264 | orchestrator | Tuesday 24 March 2026 04:36:31 +0000 (0:00:01.601) 0:01:09.157 ********* 2026-03-24 04:36:37.716278 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.716290 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.716302 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.716314 | orchestrator | 2026-03-24 04:36:37.716325 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-03-24 04:36:37.716351 | orchestrator | Tuesday 24 March 2026 04:36:32 
+0000 (0:00:01.347) 0:01:10.504 ********* 2026-03-24 04:36:37.716364 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.716375 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.716387 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:37.716398 | orchestrator | 2026-03-24 04:36:37.716410 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-03-24 04:36:37.716421 | orchestrator | Tuesday 24 March 2026 04:36:34 +0000 (0:00:01.391) 0:01:11.896 ********* 2026-03-24 04:36:37.716463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:36:37.716499 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:37.716512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:36:37.716532 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:37.716553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:36:54.495694 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:54.495781 | orchestrator | 2026-03-24 04:36:54.495789 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-03-24 04:36:54.495796 | orchestrator | Tuesday 24 March 2026 04:36:37 +0000 (0:00:03.509) 0:01:15.406 ********* 2026-03-24 04:36:54.495802 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:54.495807 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:54.495812 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:54.495817 | orchestrator | 2026-03-24 04:36:54.495834 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-03-24 04:36:54.495840 | orchestrator | Tuesday 24 March 2026 04:36:39 +0000 (0:00:01.594) 0:01:17.000 ********* 2026-03-24 04:36:54.495848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:36:54.495871 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:54.495887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:36:54.495893 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:54.495902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-24 04:36:54.495911 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:54.495916 | orchestrator | 2026-03-24 04:36:54.495921 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-03-24 04:36:54.495926 | orchestrator | Tuesday 24 March 2026 04:36:42 +0000 (0:00:03.378) 0:01:20.379 ********* 2026-03-24 04:36:54.495931 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:54.495935 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:54.495940 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:54.495945 | orchestrator | 2026-03-24 04:36:54.495950 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-24 04:36:54.495954 | orchestrator | Tuesday 24 March 2026 04:36:44 +0000 (0:00:01.738) 0:01:22.117 ********* 2026-03-24 04:36:54.495959 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:54.495998 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:54.496003 | orchestrator | 
skipping: [testbed-node-2] 2026-03-24 04:36:54.496008 | orchestrator | 2026-03-24 04:36:54.496013 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-24 04:36:54.496019 | orchestrator | Tuesday 24 March 2026 04:36:45 +0000 (0:00:01.516) 0:01:23.634 ********* 2026-03-24 04:36:54.496024 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:54.496028 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:54.496033 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:54.496038 | orchestrator | 2026-03-24 04:36:54.496043 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-24 04:36:54.496048 | orchestrator | Tuesday 24 March 2026 04:36:47 +0000 (0:00:01.363) 0:01:24.998 ********* 2026-03-24 04:36:54.496053 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:54.496057 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:54.496062 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:54.496067 | orchestrator | 2026-03-24 04:36:54.496072 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-24 04:36:54.496076 | orchestrator | Tuesday 24 March 2026 04:36:48 +0000 (0:00:01.708) 0:01:26.707 ********* 2026-03-24 04:36:54.496083 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:36:54.496091 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:36:54.496098 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:36:54.496105 | orchestrator | 2026-03-24 04:36:54.496118 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-24 04:36:54.496127 | orchestrator | Tuesday 24 March 2026 04:36:50 +0000 (0:00:01.916) 0:01:28.623 ********* 2026-03-24 04:36:54.496134 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:36:54.496143 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:36:54.496150 | orchestrator | ok: 
[testbed-node-2] 2026-03-24 04:36:54.496158 | orchestrator | 2026-03-24 04:36:54.496165 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-24 04:36:54.496172 | orchestrator | Tuesday 24 March 2026 04:36:52 +0000 (0:00:01.942) 0:01:30.566 ********* 2026-03-24 04:36:54.496180 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:36:54.496187 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:36:54.496195 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:36:54.496202 | orchestrator | 2026-03-24 04:36:54.496209 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-24 04:36:54.496216 | orchestrator | Tuesday 24 March 2026 04:36:54 +0000 (0:00:01.407) 0:01:31.973 ********* 2026-03-24 04:36:54.496230 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.665835 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.665958 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.665974 | orchestrator | 2026-03-24 04:39:31.665991 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-24 04:39:31.666070 | orchestrator | Tuesday 24 March 2026 04:36:55 +0000 (0:00:01.391) 0:01:33.365 ********* 2026-03-24 04:39:31.666091 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.666104 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.666136 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.666151 | orchestrator | 2026-03-24 04:39:31.666168 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-24 04:39:31.666182 | orchestrator | Tuesday 24 March 2026 04:36:57 +0000 (0:00:02.083) 0:01:35.448 ********* 2026-03-24 04:39:31.666198 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.666213 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.666229 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.666244 | orchestrator | 2026-03-24 
04:39:31.666260 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-24 04:39:31.666269 | orchestrator | Tuesday 24 March 2026 04:36:59 +0000 (0:00:01.416) 0:01:36.865 ********* 2026-03-24 04:39:31.666278 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:39:31.666288 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.666297 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.666306 | orchestrator | 2026-03-24 04:39:31.666314 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-24 04:39:31.666323 | orchestrator | Tuesday 24 March 2026 04:37:00 +0000 (0:00:01.428) 0:01:38.293 ********* 2026-03-24 04:39:31.666332 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.666340 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.666349 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.666357 | orchestrator | 2026-03-24 04:39:31.666366 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-24 04:39:31.666376 | orchestrator | Tuesday 24 March 2026 04:37:04 +0000 (0:00:03.745) 0:01:42.039 ********* 2026-03-24 04:39:31.666385 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.666396 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.666406 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.666415 | orchestrator | 2026-03-24 04:39:31.666592 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-24 04:39:31.666607 | orchestrator | Tuesday 24 March 2026 04:37:06 +0000 (0:00:01.688) 0:01:43.727 ********* 2026-03-24 04:39:31.666617 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.666627 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.666637 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.666646 | orchestrator | 2026-03-24 04:39:31.666657 | orchestrator | TASK [mariadb : Fail 
when MariaDB services are not synced across the whole cluster] *** 2026-03-24 04:39:31.666667 | orchestrator | Tuesday 24 March 2026 04:37:07 +0000 (0:00:01.353) 0:01:45.081 ********* 2026-03-24 04:39:31.666677 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:39:31.666686 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.666696 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.666706 | orchestrator | 2026-03-24 04:39:31.666717 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-24 04:39:31.666727 | orchestrator | Tuesday 24 March 2026 04:37:09 +0000 (0:00:01.756) 0:01:46.838 ********* 2026-03-24 04:39:31.666735 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:39:31.666744 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.666753 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.666761 | orchestrator | 2026-03-24 04:39:31.666770 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-24 04:39:31.666779 | orchestrator | Tuesday 24 March 2026 04:37:10 +0000 (0:00:01.526) 0:01:48.364 ********* 2026-03-24 04:39:31.666787 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:39:31.666796 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.666804 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.666813 | orchestrator | 2026-03-24 04:39:31.666822 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-24 04:39:31.666844 | orchestrator | Tuesday 24 March 2026 04:37:12 +0000 (0:00:01.530) 0:01:49.894 ********* 2026-03-24 04:39:31.666853 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:39:31.666862 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:39:31.666870 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:39:31.666878 | orchestrator | 2026-03-24 04:39:31.666887 | orchestrator | RUNNING HANDLER 
[mariadb : Start MariaDB on new nodes] ************************* 2026-03-24 04:39:31.666896 | orchestrator | Tuesday 24 March 2026 04:37:13 +0000 (0:00:01.673) 0:01:51.568 ********* 2026-03-24 04:39:31.666904 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:39:31.666913 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.666921 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.666930 | orchestrator | 2026-03-24 04:39:31.666938 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-24 04:39:31.666947 | orchestrator | 2026-03-24 04:39:31.666955 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-24 04:39:31.666964 | orchestrator | Tuesday 24 March 2026 04:37:15 +0000 (0:00:01.967) 0:01:53.536 ********* 2026-03-24 04:39:31.666972 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:39:31.666981 | orchestrator | 2026-03-24 04:39:31.666989 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-24 04:39:31.666998 | orchestrator | Tuesday 24 March 2026 04:37:42 +0000 (0:00:26.823) 0:02:20.359 ********* 2026-03-24 04:39:31.667007 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.667015 | orchestrator | 2026-03-24 04:39:31.667024 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-24 04:39:31.667032 | orchestrator | Tuesday 24 March 2026 04:37:47 +0000 (0:00:04.652) 0:02:25.012 ********* 2026-03-24 04:39:31.667041 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.667050 | orchestrator | 2026-03-24 04:39:31.667058 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-24 04:39:31.667067 | orchestrator | 2026-03-24 04:39:31.667075 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-24 04:39:31.667084 | 
orchestrator | Tuesday 24 March 2026 04:37:50 +0000 (0:00:02.975) 0:02:27.988 ********* 2026-03-24 04:39:31.667092 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:39:31.667101 | orchestrator | 2026-03-24 04:39:31.667110 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-24 04:39:31.667137 | orchestrator | Tuesday 24 March 2026 04:38:15 +0000 (0:00:25.665) 0:02:53.654 ********* 2026-03-24 04:39:31.667146 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 2026-03-24 04:39:31.667155 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.667164 | orchestrator | 2026-03-24 04:39:31.667173 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-24 04:39:31.667188 | orchestrator | Tuesday 24 March 2026 04:38:24 +0000 (0:00:08.142) 0:03:01.796 ********* 2026-03-24 04:39:31.667198 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.667206 | orchestrator | 2026-03-24 04:39:31.667215 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-24 04:39:31.667223 | orchestrator | 2026-03-24 04:39:31.667232 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-24 04:39:31.667241 | orchestrator | Tuesday 24 March 2026 04:38:27 +0000 (0:00:03.484) 0:03:05.281 ********* 2026-03-24 04:39:31.667249 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:39:31.667258 | orchestrator | 2026-03-24 04:39:31.667266 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-24 04:39:31.667275 | orchestrator | Tuesday 24 March 2026 04:38:52 +0000 (0:00:24.493) 0:03:29.774 ********* 2026-03-24 04:39:31.667284 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.667292 | orchestrator | 2026-03-24 04:39:31.667301 | orchestrator | TASK [mariadb : Wait for MariaDB 
service to sync WSREP] ************************ 2026-03-24 04:39:31.667309 | orchestrator | Tuesday 24 March 2026 04:38:57 +0000 (0:00:05.294) 0:03:35.069 ********* 2026-03-24 04:39:31.667324 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-24 04:39:31.667332 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-24 04:39:31.667341 | orchestrator | mariadb_bootstrap_restart 2026-03-24 04:39:31.667350 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.667358 | orchestrator | 2026-03-24 04:39:31.667367 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-24 04:39:31.667376 | orchestrator | skipping: no hosts matched 2026-03-24 04:39:31.667384 | orchestrator | 2026-03-24 04:39:31.667393 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-24 04:39:31.667401 | orchestrator | skipping: no hosts matched 2026-03-24 04:39:31.667410 | orchestrator | 2026-03-24 04:39:31.667438 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-24 04:39:31.667448 | orchestrator | 2026-03-24 04:39:31.667456 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-24 04:39:31.667465 | orchestrator | Tuesday 24 March 2026 04:39:01 +0000 (0:00:04.162) 0:03:39.231 ********* 2026-03-24 04:39:31.667473 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:39:31.667482 | orchestrator | 2026-03-24 04:39:31.667491 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-24 04:39:31.667499 | orchestrator | Tuesday 24 March 2026 04:39:03 +0000 (0:00:01.887) 0:03:41.118 ********* 2026-03-24 04:39:31.667508 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.667517 | orchestrator | skipping: [testbed-node-2] 
2026-03-24 04:39:31.667525 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.667534 | orchestrator | 2026-03-24 04:39:31.667542 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-24 04:39:31.667565 | orchestrator | Tuesday 24 March 2026 04:39:06 +0000 (0:00:03.189) 0:03:44.308 ********* 2026-03-24 04:39:31.667574 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.667592 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.667601 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:39:31.667609 | orchestrator | 2026-03-24 04:39:31.667618 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-24 04:39:31.667627 | orchestrator | Tuesday 24 March 2026 04:39:09 +0000 (0:00:03.250) 0:03:47.558 ********* 2026-03-24 04:39:31.667635 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.667644 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.667653 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.667661 | orchestrator | 2026-03-24 04:39:31.667670 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-24 04:39:31.667679 | orchestrator | Tuesday 24 March 2026 04:39:13 +0000 (0:00:03.209) 0:03:50.767 ********* 2026-03-24 04:39:31.667687 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.667696 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.667704 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:39:31.667713 | orchestrator | 2026-03-24 04:39:31.667722 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-24 04:39:31.667730 | orchestrator | Tuesday 24 March 2026 04:39:16 +0000 (0:00:03.509) 0:03:54.277 ********* 2026-03-24 04:39:31.667739 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.667747 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.667756 | 
orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.667765 | orchestrator | 2026-03-24 04:39:31.667773 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-24 04:39:31.667782 | orchestrator | Tuesday 24 March 2026 04:39:22 +0000 (0:00:06.365) 0:04:00.643 ********* 2026-03-24 04:39:31.667790 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:39:31.667799 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.667808 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.667816 | orchestrator | 2026-03-24 04:39:31.667825 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-24 04:39:31.667839 | orchestrator | Tuesday 24 March 2026 04:39:26 +0000 (0:00:03.543) 0:04:04.187 ********* 2026-03-24 04:39:31.667848 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:39:31.667857 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:39:31.667865 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:39:31.667874 | orchestrator | 2026-03-24 04:39:31.667882 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-24 04:39:31.667891 | orchestrator | Tuesday 24 March 2026 04:39:28 +0000 (0:00:01.544) 0:04:05.731 ********* 2026-03-24 04:39:31.667900 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:39:31.667908 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:39:31.667917 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:39:31.667926 | orchestrator | 2026-03-24 04:39:31.667934 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-24 04:39:31.667948 | orchestrator | Tuesday 24 March 2026 04:39:31 +0000 (0:00:03.624) 0:04:09.355 ********* 2026-03-24 04:39:52.235088 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:39:52.235193 | orchestrator | 2026-03-24 04:39:52.235207 | 
orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-03-24 04:39:52.235217 | orchestrator | Tuesday 24 March 2026 04:39:33 +0000 (0:00:01.981) 0:04:11.337 ********* 2026-03-24 04:39:52.235242 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:39:52.235253 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:39:52.235261 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:39:52.235270 | orchestrator | 2026-03-24 04:39:52.235279 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:39:52.235289 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-24 04:39:52.235299 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-24 04:39:52.235308 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-24 04:39:52.235316 | orchestrator | 2026-03-24 04:39:52.235325 | orchestrator | 2026-03-24 04:39:52.235334 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:39:52.235343 | orchestrator | Tuesday 24 March 2026 04:39:51 +0000 (0:00:18.170) 0:04:29.507 ********* 2026-03-24 04:39:52.235410 | orchestrator | =============================================================================== 2026-03-24 04:39:52.235420 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 76.98s 2026-03-24 04:39:52.235429 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 18.17s 2026-03-24 04:39:52.235438 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 18.09s 2026-03-24 04:39:52.235447 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.62s 2026-03-24 04:39:52.235455 | orchestrator | 
service-check : mariadb | Get container facts --------------------------- 6.36s 2026-03-24 04:39:52.235464 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.50s 2026-03-24 04:39:52.235472 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.11s 2026-03-24 04:39:52.235481 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.05s 2026-03-24 04:39:52.235489 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.96s 2026-03-24 04:39:52.235498 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.93s 2026-03-24 04:39:52.235506 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.76s 2026-03-24 04:39:52.235515 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.75s 2026-03-24 04:39:52.235524 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.62s 2026-03-24 04:39:52.235555 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.54s 2026-03-24 04:39:52.235564 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.51s 2026-03-24 04:39:52.235572 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.51s 2026-03-24 04:39:52.235581 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.38s 2026-03-24 04:39:52.235589 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.25s 2026-03-24 04:39:52.235598 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 3.21s 2026-03-24 04:39:52.235606 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.19s 2026-03-24 04:39:52.565238 | orchestrator | + osism apply 
-a upgrade rabbitmq 2026-03-24 04:39:54.617941 | orchestrator | 2026-03-24 04:39:54 | INFO  | Task a4c3aaa3-6fd5-408a-bd2b-df4a95f1b54b (rabbitmq) was prepared for execution. 2026-03-24 04:39:54.618156 | orchestrator | 2026-03-24 04:39:54 | INFO  | It takes a moment until task a4c3aaa3-6fd5-408a-bd2b-df4a95f1b54b (rabbitmq) has been started and output is visible here. 2026-03-24 04:40:23.642351 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-24 04:40:23.642444 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-24 04:40:23.642462 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-24 04:40:23.642468 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-24 04:40:23.642481 | orchestrator | 2026-03-24 04:40:23.642489 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 04:40:23.642495 | orchestrator | 2026-03-24 04:40:23.642502 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 04:40:23.642509 | orchestrator | Tuesday 24 March 2026 04:40:00 +0000 (0:00:01.204) 0:00:01.204 ********* 2026-03-24 04:40:23.642515 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:40:23.642522 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:40:23.642528 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:40:23.642535 | orchestrator | 2026-03-24 04:40:23.642541 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 04:40:23.642547 | orchestrator | Tuesday 24 March 2026 04:40:00 +0000 (0:00:00.897) 0:00:02.102 ********* 2026-03-24 04:40:23.642554 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-24 04:40:23.642560 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-24 04:40:23.642566 | orchestrator | ok: [testbed-node-2] => 
(item=enable_rabbitmq_True) 2026-03-24 04:40:23.642572 | orchestrator | 2026-03-24 04:40:23.642591 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-24 04:40:23.642598 | orchestrator | 2026-03-24 04:40:23.642604 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-24 04:40:23.642610 | orchestrator | Tuesday 24 March 2026 04:40:02 +0000 (0:00:01.102) 0:00:03.205 ********* 2026-03-24 04:40:23.642617 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:40:23.642624 | orchestrator | 2026-03-24 04:40:23.642630 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-24 04:40:23.642637 | orchestrator | Tuesday 24 March 2026 04:40:03 +0000 (0:00:01.028) 0:00:04.233 ********* 2026-03-24 04:40:23.642643 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:40:23.642649 | orchestrator | 2026-03-24 04:40:23.642656 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-24 04:40:23.642662 | orchestrator | Tuesday 24 March 2026 04:40:04 +0000 (0:00:01.353) 0:00:05.586 ********* 2026-03-24 04:40:23.642668 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:40:23.642691 | orchestrator | 2026-03-24 04:40:23.642698 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-24 04:40:23.642704 | orchestrator | Tuesday 24 March 2026 04:40:06 +0000 (0:00:02.038) 0:00:07.624 ********* 2026-03-24 04:40:23.642710 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:40:23.642717 | orchestrator | 2026-03-24 04:40:23.642723 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-24 04:40:23.642729 | orchestrator | Tuesday 24 March 2026 04:40:15 +0000 (0:00:08.731) 0:00:16.356 ********* 2026-03-24 
04:40:23.642736 | orchestrator | ok: [testbed-node-0] => { 2026-03-24 04:40:23.642742 | orchestrator |  "changed": false, 2026-03-24 04:40:23.642748 | orchestrator |  "msg": "All assertions passed" 2026-03-24 04:40:23.642755 | orchestrator | } 2026-03-24 04:40:23.642761 | orchestrator | 2026-03-24 04:40:23.642767 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-24 04:40:23.642774 | orchestrator | Tuesday 24 March 2026 04:40:15 +0000 (0:00:00.336) 0:00:16.692 ********* 2026-03-24 04:40:23.642780 | orchestrator | ok: [testbed-node-0] => { 2026-03-24 04:40:23.642786 | orchestrator |  "changed": false, 2026-03-24 04:40:23.642792 | orchestrator |  "msg": "All assertions passed" 2026-03-24 04:40:23.642798 | orchestrator | } 2026-03-24 04:40:23.642804 | orchestrator | 2026-03-24 04:40:23.642811 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-24 04:40:23.642817 | orchestrator | Tuesday 24 March 2026 04:40:16 +0000 (0:00:00.657) 0:00:17.350 ********* 2026-03-24 04:40:23.642823 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:40:23.642829 | orchestrator | 2026-03-24 04:40:23.642835 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-24 04:40:23.642841 | orchestrator | Tuesday 24 March 2026 04:40:17 +0000 (0:00:00.872) 0:00:18.222 ********* 2026-03-24 04:40:23.642848 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:40:23.642854 | orchestrator | 2026-03-24 04:40:23.642860 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-24 04:40:23.642866 | orchestrator | Tuesday 24 March 2026 04:40:18 +0000 (0:00:01.220) 0:00:19.443 ********* 2026-03-24 04:40:23.642872 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:40:23.642878 | orchestrator | 2026-03-24 
04:40:23.642886 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-24 04:40:23.642893 | orchestrator | Tuesday 24 March 2026 04:40:20 +0000 (0:00:01.977) 0:00:21.420 ********* 2026-03-24 04:40:23.642900 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:40:23.642907 | orchestrator | 2026-03-24 04:40:23.642914 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-24 04:40:23.642921 | orchestrator | Tuesday 24 March 2026 04:40:21 +0000 (0:00:01.144) 0:00:22.565 ********* 2026-03-24 04:40:23.642946 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:40:23.642961 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:40:23.642976 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:40:23.642984 | orchestrator | 2026-03-24 04:40:23.642990 | orchestrator | 
TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-24 04:40:23.642997 | orchestrator | Tuesday 24 March 2026 04:40:22 +0000 (0:00:00.781) 0:00:23.346 ********* 2026-03-24 04:40:23.643008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:40:34.830553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:40:34.830741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:40:34.830767 | orchestrator | 2026-03-24 04:40:34.830786 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-24 04:40:34.830803 | orchestrator | Tuesday 24 March 2026 04:40:23 +0000 (0:00:01.407) 0:00:24.753 ********* 2026-03-24 04:40:34.830819 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-24 
04:40:34.830836 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-24 04:40:34.830852 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-24 04:40:34.830867 | orchestrator | 2026-03-24 04:40:34.830883 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-24 04:40:34.830899 | orchestrator | Tuesday 24 March 2026 04:40:25 +0000 (0:00:01.411) 0:00:26.164 ********* 2026-03-24 04:40:34.830915 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-24 04:40:34.830931 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-24 04:40:34.830946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-24 04:40:34.830962 | orchestrator | 2026-03-24 04:40:34.830977 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-24 04:40:34.830993 | orchestrator | Tuesday 24 March 2026 04:40:27 +0000 (0:00:01.958) 0:00:28.123 ********* 2026-03-24 04:40:34.831008 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-24 04:40:34.831024 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-24 04:40:34.831055 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-24 04:40:34.831072 | orchestrator | 2026-03-24 04:40:34.831089 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-24 04:40:34.831106 | orchestrator | Tuesday 24 March 2026 04:40:28 +0000 (0:00:01.290) 0:00:29.414 ********* 2026-03-24 04:40:34.831122 | orchestrator | ok: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-24 04:40:34.831139 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-24 04:40:34.831156 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-24 04:40:34.831183 | orchestrator | 2026-03-24 04:40:34.831200 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-24 04:40:34.831302 | orchestrator | Tuesday 24 March 2026 04:40:29 +0000 (0:00:01.301) 0:00:30.715 ********* 2026-03-24 04:40:34.831325 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-24 04:40:34.831343 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-24 04:40:34.831361 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-24 04:40:34.831378 | orchestrator | 2026-03-24 04:40:34.831395 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-24 04:40:34.831412 | orchestrator | Tuesday 24 March 2026 04:40:30 +0000 (0:00:01.248) 0:00:31.964 ********* 2026-03-24 04:40:34.831429 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-24 04:40:34.831447 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-24 04:40:34.831464 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-24 04:40:34.831481 | orchestrator | 2026-03-24 04:40:34.831499 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-24 04:40:34.831515 | orchestrator | Tuesday 24 March 2026 04:40:32 +0000 (0:00:01.554) 0:00:33.518 ********* 2026-03-24 04:40:34.831532 | orchestrator | included: 
/ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:40:34.831548 | orchestrator | 2026-03-24 04:40:34.831564 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-03-24 04:40:34.831580 | orchestrator | Tuesday 24 March 2026 04:40:33 +0000 (0:00:00.937) 0:00:34.455 ********* 2026-03-24 04:40:34.831609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:40:34.831629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:40:34.831672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:40:40.228430 | orchestrator | 2026-03-24 04:40:40.228566 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-03-24 04:40:40.228593 | orchestrator | Tuesday 24 March 2026 04:40:34 +0000 (0:00:01.479) 0:00:35.935 
********* 2026-03-24 04:40:40.228633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:40:40.228649 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:40:40.228661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:40:40.228672 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:40:40.228683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:40:40.228716 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:40:40.228727 | orchestrator | 2026-03-24 04:40:40.228737 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-03-24 04:40:40.228747 | orchestrator | Tuesday 24 March 2026 04:40:35 +0000 (0:00:00.430) 0:00:36.365 ********* 2026-03-24 04:40:40.228777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:40:40.228788 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:40:40.228803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:40:40.228814 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:40:40.228825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:40:40.228843 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:40:40.228853 | orchestrator | 2026-03-24 04:40:40.228863 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-24 04:40:40.228873 | orchestrator | Tuesday 24 March 2026 04:40:36 +0000 (0:00:00.965) 0:00:37.330 ********* 2026-03-24 04:40:40.228882 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:40:40.228893 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:40:40.228903 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:40:40.228912 | orchestrator | 2026-03-24 04:40:40.228922 | 
orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-03-24 04:40:40.228932 | orchestrator | Tuesday 24 March 2026 04:40:39 +0000 (0:00:02.794) 0:00:40.124 ********* 2026-03-24 04:40:40.228950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:41:35.712495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:41:35.712639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-24 04:41:35.712697 | orchestrator | 2026-03-24 04:41:35.712722 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-03-24 04:41:35.712742 | orchestrator | Tuesday 24 March 2026 04:40:40 +0000 (0:00:01.220) 0:00:41.345 ********* 2026-03-24 04:41:35.712763 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:41:35.712782 | orchestrator |  "msg": "Notifying 
handlers" 2026-03-24 04:41:35.712801 | orchestrator | } 2026-03-24 04:41:35.712820 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:41:35.712838 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:41:35.712856 | orchestrator | } 2026-03-24 04:41:35.712873 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:41:35.712892 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:41:35.712910 | orchestrator | } 2026-03-24 04:41:35.712928 | orchestrator | 2026-03-24 04:41:35.712947 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:41:35.712965 | orchestrator | Tuesday 24 March 2026 04:40:40 +0000 (0:00:00.403) 0:00:41.748 ********* 2026-03-24 04:41:35.712985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:41:35.713032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:41:35.713053 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-24 04:41:35.713099 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-24 04:41:35.713137 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:41:35.713157 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:41:35.713190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-24 04:41:35.713210 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:41:35.713229 | orchestrator | 2026-03-24 04:41:35.713248 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-24 04:41:35.713267 | orchestrator | Tuesday 24 March 2026 04:40:41 +0000 (0:00:01.236) 0:00:42.984 ********* 2026-03-24 04:41:35.713286 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:41:35.713304 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:41:35.713321 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:41:35.713340 | orchestrator | 2026-03-24 04:41:35.713358 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-24 04:41:35.713377 | orchestrator | 2026-03-24 04:41:35.713395 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-24 04:41:35.713413 | orchestrator | Tuesday 24 March 2026 04:40:42 +0000 (0:00:00.987) 0:00:43.971 ********* 2026-03-24 04:41:35.713430 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:41:35.713449 | orchestrator | 2026-03-24 04:41:35.713467 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-24 04:41:35.713484 | orchestrator | Tuesday 24 March 2026 04:40:44 +0000 (0:00:01.170) 0:00:45.142 ********* 2026-03-24 04:41:35.713502 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:41:35.713518 | orchestrator | 2026-03-24 04:41:35.713536 | orchestrator | TASK 
[rabbitmq : Restart rabbitmq container] *********************************** 2026-03-24 04:41:35.713553 | orchestrator | Tuesday 24 March 2026 04:40:54 +0000 (0:00:10.219) 0:00:55.361 ********* 2026-03-24 04:41:35.713571 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:41:35.713587 | orchestrator | 2026-03-24 04:41:35.713605 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-24 04:41:35.713623 | orchestrator | Tuesday 24 March 2026 04:41:02 +0000 (0:00:08.087) 0:01:03.449 ********* 2026-03-24 04:41:35.713640 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:41:35.713658 | orchestrator | 2026-03-24 04:41:35.713677 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-24 04:41:35.713694 | orchestrator | 2026-03-24 04:41:35.713713 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-24 04:41:35.713732 | orchestrator | Tuesday 24 March 2026 04:41:12 +0000 (0:00:10.330) 0:01:13.779 ********* 2026-03-24 04:41:35.713749 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:41:35.713767 | orchestrator | 2026-03-24 04:41:35.713785 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-24 04:41:35.713803 | orchestrator | Tuesday 24 March 2026 04:41:13 +0000 (0:00:01.025) 0:01:14.804 ********* 2026-03-24 04:41:35.713821 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:41:35.713839 | orchestrator | 2026-03-24 04:41:35.713857 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-24 04:41:35.713874 | orchestrator | Tuesday 24 March 2026 04:41:22 +0000 (0:00:08.670) 0:01:23.475 ********* 2026-03-24 04:41:35.713908 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:42:22.647612 | orchestrator | 2026-03-24 04:42:22.647733 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
******************************** 2026-03-24 04:42:22.647751 | orchestrator | Tuesday 24 March 2026 04:41:35 +0000 (0:00:13.346) 0:01:36.821 ********* 2026-03-24 04:42:22.647764 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:42:22.647776 | orchestrator | 2026-03-24 04:42:22.647787 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-24 04:42:22.647798 | orchestrator | 2026-03-24 04:42:22.647810 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-24 04:42:22.647821 | orchestrator | Tuesday 24 March 2026 04:41:45 +0000 (0:00:09.553) 0:01:46.374 ********* 2026-03-24 04:42:22.647878 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:42:22.647892 | orchestrator | 2026-03-24 04:42:22.647908 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-24 04:42:22.647919 | orchestrator | Tuesday 24 March 2026 04:41:46 +0000 (0:00:01.230) 0:01:47.605 ********* 2026-03-24 04:42:22.647930 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:42:22.647960 | orchestrator | 2026-03-24 04:42:22.648080 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-24 04:42:22.648093 | orchestrator | Tuesday 24 March 2026 04:41:55 +0000 (0:00:08.864) 0:01:56.470 ********* 2026-03-24 04:42:22.648103 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:42:22.648114 | orchestrator | 2026-03-24 04:42:22.648125 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-24 04:42:22.648136 | orchestrator | Tuesday 24 March 2026 04:42:08 +0000 (0:00:13.349) 0:02:09.820 ********* 2026-03-24 04:42:22.648147 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:42:22.648159 | orchestrator | 2026-03-24 04:42:22.648171 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-24 
04:42:22.648183 | orchestrator | 2026-03-24 04:42:22.648196 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-24 04:42:22.648208 | orchestrator | Tuesday 24 March 2026 04:42:17 +0000 (0:00:09.170) 0:02:18.990 ********* 2026-03-24 04:42:22.648221 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:42:22.648233 | orchestrator | 2026-03-24 04:42:22.648245 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-24 04:42:22.648258 | orchestrator | Tuesday 24 March 2026 04:42:18 +0000 (0:00:00.558) 0:02:19.549 ********* 2026-03-24 04:42:22.648270 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:42:22.648283 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:42:22.648295 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:42:22.648306 | orchestrator | 2026-03-24 04:42:22.648317 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:42:22.648329 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-24 04:42:22.648341 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-24 04:42:22.648352 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-24 04:42:22.648363 | orchestrator | 2026-03-24 04:42:22.648374 | orchestrator | 2026-03-24 04:42:22.648385 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:42:22.648396 | orchestrator | Tuesday 24 March 2026 04:42:22 +0000 (0:00:03.825) 0:02:23.374 ********* 2026-03-24 04:42:22.648407 | orchestrator | =============================================================================== 2026-03-24 04:42:22.648417 | orchestrator | rabbitmq : Restart rabbitmq container 
---------------------------------- 34.78s 2026-03-24 04:42:22.648428 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 29.05s 2026-03-24 04:42:22.648439 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 27.75s 2026-03-24 04:42:22.648475 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 8.73s 2026-03-24 04:42:22.648486 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.83s 2026-03-24 04:42:22.648497 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 3.43s 2026-03-24 04:42:22.648507 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.79s 2026-03-24 04:42:22.648518 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.04s 2026-03-24 04:42:22.648529 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.98s 2026-03-24 04:42:22.648539 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.96s 2026-03-24 04:42:22.648550 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.55s 2026-03-24 04:42:22.648561 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.48s 2026-03-24 04:42:22.648571 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.41s 2026-03-24 04:42:22.648582 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.41s 2026-03-24 04:42:22.648592 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.35s 2026-03-24 04:42:22.648603 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.30s 2026-03-24 04:42:22.648613 | orchestrator | rabbitmq : Copying over erl_inetrc 
-------------------------------------- 1.29s 2026-03-24 04:42:22.648624 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.25s 2026-03-24 04:42:22.648634 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.24s 2026-03-24 04:42:22.648645 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.22s 2026-03-24 04:42:22.967251 | orchestrator | + osism apply -a upgrade openvswitch 2026-03-24 04:42:24.961070 | orchestrator | 2026-03-24 04:42:24 | INFO  | Task 56520d46-36fd-4d04-98d4-5b8f23fc38c4 (openvswitch) was prepared for execution. 2026-03-24 04:42:24.961189 | orchestrator | 2026-03-24 04:42:24 | INFO  | It takes a moment until task 56520d46-36fd-4d04-98d4-5b8f23fc38c4 (openvswitch) has been started and output is visible here. 2026-03-24 04:42:39.976382 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-24 04:42:39.976501 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-24 04:42:39.976548 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-24 04:42:39.976559 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-24 04:42:39.976582 | orchestrator | 2026-03-24 04:42:39.976595 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 04:42:39.976606 | orchestrator | 2026-03-24 04:42:39.976617 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 04:42:39.976628 | orchestrator | Tuesday 24 March 2026 04:42:29 +0000 (0:00:00.812) 0:00:00.812 ********* 2026-03-24 04:42:39.976639 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:42:39.976682 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:42:39.976694 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:42:39.976705 | orchestrator | ok: [testbed-node-3] 
2026-03-24 04:42:39.976716 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:42:39.976727 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:42:39.976738 | orchestrator | 2026-03-24 04:42:39.976749 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 04:42:39.976759 | orchestrator | Tuesday 24 March 2026 04:42:30 +0000 (0:00:01.460) 0:00:02.272 ********* 2026-03-24 04:42:39.976771 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-24 04:42:39.976801 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-24 04:42:39.976813 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-24 04:42:39.976823 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-24 04:42:39.976834 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-24 04:42:39.976845 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-24 04:42:39.976856 | orchestrator | 2026-03-24 04:42:39.976866 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-24 04:42:39.976877 | orchestrator | 2026-03-24 04:42:39.976888 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-24 04:42:39.976898 | orchestrator | Tuesday 24 March 2026 04:42:31 +0000 (0:00:00.930) 0:00:03.203 ********* 2026-03-24 04:42:39.976913 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 04:42:39.976928 | orchestrator | 2026-03-24 04:42:39.976961 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-24 04:42:39.976973 | 
orchestrator | Tuesday 24 March 2026 04:42:33 +0000 (0:00:01.571) 0:00:04.774 ********* 2026-03-24 04:42:39.976986 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-24 04:42:39.976999 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-24 04:42:39.977011 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-24 04:42:39.977023 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-24 04:42:39.977035 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-24 04:42:39.977048 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-24 04:42:39.977060 | orchestrator | 2026-03-24 04:42:39.977073 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-24 04:42:39.977085 | orchestrator | Tuesday 24 March 2026 04:42:34 +0000 (0:00:01.217) 0:00:05.991 ********* 2026-03-24 04:42:39.977098 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-03-24 04:42:39.977110 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-03-24 04:42:39.977123 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-03-24 04:42:39.977136 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-03-24 04:42:39.977146 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-03-24 04:42:39.977157 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-03-24 04:42:39.977168 | orchestrator | 2026-03-24 04:42:39.977179 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-24 04:42:39.977190 | orchestrator | Tuesday 24 March 2026 04:42:35 +0000 (0:00:01.406) 0:00:07.397 ********* 2026-03-24 04:42:39.977201 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-24 04:42:39.977212 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:42:39.977223 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-24 04:42:39.977233 | 
orchestrator | skipping: [testbed-node-1] 2026-03-24 04:42:39.977244 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-24 04:42:39.977255 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:42:39.977266 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-24 04:42:39.977277 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:42:39.977288 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-24 04:42:39.977298 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:42:39.977309 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-24 04:42:39.977320 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:42:39.977330 | orchestrator | 2026-03-24 04:42:39.977341 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-24 04:42:39.977352 | orchestrator | Tuesday 24 March 2026 04:42:37 +0000 (0:00:01.555) 0:00:08.953 ********* 2026-03-24 04:42:39.977371 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:42:39.977382 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:42:39.977393 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:42:39.977403 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:42:39.977414 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:42:39.977442 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:42:39.977453 | orchestrator | 2026-03-24 04:42:39.977464 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-24 04:42:39.977476 | orchestrator | Tuesday 24 March 2026 04:42:38 +0000 (0:00:00.944) 0:00:09.897 ********* 2026-03-24 04:42:39.977496 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:39.977515 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:39.977526 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2026-03-24 04:42:39.977538 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:39.977549 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:39.977582 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262091 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262197 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262214 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262228 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262239 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262312 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262328 | orchestrator | 2026-03-24 04:42:42.262342 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-24 04:42:42.262355 | orchestrator | Tuesday 24 March 2026 04:42:39 +0000 (0:00:01.510) 0:00:11.408 ********* 2026-03-24 04:42:42.262368 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262381 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262393 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262404 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262429 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:42.262450 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:45.953269 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:45.953367 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:45.953379 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:45.953411 | orchestrator | 
ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:45.953433 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:45.953458 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 04:42:45.953467 | orchestrator |
2026-03-24 04:42:45.953477 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-24 04:42:45.953487 | orchestrator | Tuesday 24 March 2026 04:42:42 +0000 (0:00:02.390) 0:00:13.798 *********
2026-03-24 04:42:45.953495 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:42:45.953504 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:42:45.953512 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:42:45.953520 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:42:45.953528 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:42:45.953536 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:42:45.953544 | orchestrator |
2026-03-24 04:42:45.953552 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-03-24 04:42:45.953560 | orchestrator | Tuesday 24 March 2026 04:42:43 +0000 (0:00:01.402) 0:00:15.201 *********
2026-03-24 04:42:45.953568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 04:42:45.953585 | orchestrator |
changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:45.953594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:45.953607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:45.953622 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:47.336577 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-24 04:42:47.336666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:47.336699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:47.336708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:47.336729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:47.336751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-24 04:42:47.336759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 04:42:47.336772 | orchestrator |
2026-03-24 04:42:47.336782 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-03-24 04:42:47.336790 | orchestrator | Tuesday 24 March 2026 04:42:46 +0000 (0:00:00.957) 0:00:17.505 *********
2026-03-24 04:42:47.336799 | orchestrator | changed: [testbed-node-0] => {
2026-03-24 04:42:47.336808 | orchestrator |  "msg": "Notifying handlers"
2026-03-24 04:42:47.336816 | orchestrator | }
2026-03-24 04:42:47.336829 | orchestrator | changed: [testbed-node-1] => {
2026-03-24 04:42:47.336842 | orchestrator |  "msg": "Notifying handlers"
2026-03-24 04:42:47.336853 | orchestrator | }
2026-03-24 04:42:47.336865 | orchestrator | changed: [testbed-node-2] => {
2026-03-24 04:42:47.336878 | orchestrator |  "msg": "Notifying handlers"
2026-03-24 04:42:47.336892 | orchestrator | }
2026-03-24 04:42:47.336906 | orchestrator | changed: [testbed-node-3] => {
2026-03-24 04:42:47.337027 | orchestrator |  "msg": "Notifying handlers"
2026-03-24 04:42:47.337043 | orchestrator | }
2026-03-24 04:42:47.337055 | orchestrator | changed: [testbed-node-4] => {
2026-03-24 04:42:47.337067 | orchestrator |  "msg": "Notifying handlers"
2026-03-24 04:42:47.337079 | orchestrator | }
2026-03-24 04:42:47.337091 | orchestrator | changed: [testbed-node-5] => {
2026-03-24 04:42:47.337104 | orchestrator |  "msg": "Notifying handlers"
2026-03-24 04:42:47.337117 | orchestrator | }
2026-03-24 04:42:47.337129 | orchestrator |
2026-03-24 04:42:47.337142 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-24 04:42:47.337155 |
orchestrator | Tuesday 24 March 2026 04:42:47 +0000 (0:00:00.957) 0:00:18.462 ********* 2026-03-24 04:42:47.337169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-24 04:42:47.337190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-24 04:42:47.337204 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:42:47.337216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-24 04:42:47.337251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-24 04:43:12.341589 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:43:12.341713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-24 04:43:12.341733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-24 04:43:12.341746 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:43:12.341774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-24 04:43:12.341787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-24 04:43:12.341799 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-03-24 04:43:12.341812 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-03-24 04:43:12.341955 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:43:12.341987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-24 04:43:12.342120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-24 04:43:12.342145 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:43:12.342166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-24 04:43:12.342186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-24 04:43:12.342207 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:43:12.342227 | orchestrator | 2026-03-24 04:43:12.342261 | orchestrator | TASK [openvswitch : 
Flush Handlers] ********************************************
2026-03-24 04:43:12.342282 | orchestrator | Tuesday 24 March 2026 04:42:48 +0000 (0:00:01.773) 0:00:20.236 *********
2026-03-24 04:43:12.342297 | orchestrator |
2026-03-24 04:43:12.342310 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 04:43:12.342322 | orchestrator | Tuesday 24 March 2026 04:42:48 +0000 (0:00:00.154) 0:00:20.391 *********
2026-03-24 04:43:12.342335 | orchestrator |
2026-03-24 04:43:12.342348 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 04:43:12.342360 | orchestrator | Tuesday 24 March 2026 04:42:49 +0000 (0:00:00.143) 0:00:20.534 *********
2026-03-24 04:43:12.342385 | orchestrator |
2026-03-24 04:43:12.342397 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 04:43:12.342409 | orchestrator | Tuesday 24 March 2026 04:42:49 +0000 (0:00:00.143) 0:00:20.677 *********
2026-03-24 04:43:12.342421 | orchestrator |
2026-03-24 04:43:12.342434 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 04:43:12.342446 | orchestrator | Tuesday 24 March 2026 04:42:49 +0000 (0:00:00.340) 0:00:21.018 *********
2026-03-24 04:43:12.342459 | orchestrator |
2026-03-24 04:43:12.342471 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-24 04:43:12.342484 | orchestrator | Tuesday 24 March 2026 04:42:49 +0000 (0:00:00.146) 0:00:21.164 *********
2026-03-24 04:43:12.342495 | orchestrator |
2026-03-24 04:43:12.342508 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-24 04:43:12.342519 | orchestrator | Tuesday 24 March 2026 04:42:49 +0000 (0:00:00.143) 0:00:21.308 *********
2026-03-24 04:43:12.342530 | orchestrator | changed: [testbed-node-3]
2026-03-24 04:43:12.342541 | orchestrator | changed: [testbed-node-5]
2026-03-24 04:43:12.342552 | orchestrator | changed: [testbed-node-4]
2026-03-24 04:43:12.342563 | orchestrator | changed: [testbed-node-0]
2026-03-24 04:43:12.342574 | orchestrator | changed: [testbed-node-1]
2026-03-24 04:43:12.342584 | orchestrator | changed: [testbed-node-2]
2026-03-24 04:43:12.342595 | orchestrator |
2026-03-24 04:43:12.342606 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-24 04:43:12.342617 | orchestrator | Tuesday 24 March 2026 04:43:01 +0000 (0:00:11.265) 0:00:32.573 *********
2026-03-24 04:43:12.342628 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:43:12.342640 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:43:12.342651 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:43:12.342661 | orchestrator | ok: [testbed-node-3]
2026-03-24 04:43:12.342672 | orchestrator | ok: [testbed-node-4]
2026-03-24 04:43:12.342682 | orchestrator | ok: [testbed-node-5]
2026-03-24 04:43:12.342693 | orchestrator |
2026-03-24 04:43:12.342704 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-24 04:43:12.342715 | orchestrator | Tuesday 24 March 2026 04:43:02 +0000 (0:00:01.165) 0:00:33.739 *********
2026-03-24 04:43:12.342726 | orchestrator | changed: [testbed-node-3]
2026-03-24 04:43:12.342747 | orchestrator | changed: [testbed-node-4]
2026-03-24 04:43:25.475233 | orchestrator | changed: [testbed-node-5]
2026-03-24 04:43:25.475341 | orchestrator | changed: [testbed-node-2]
2026-03-24 04:43:25.475354 | orchestrator | changed: [testbed-node-0]
2026-03-24 04:43:25.475364 | orchestrator | changed: [testbed-node-1]
2026-03-24 04:43:25.475373 | orchestrator |
2026-03-24 04:43:25.475383 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-24 04:43:25.475394 | orchestrator | Tuesday 24 March 2026 04:43:12 +0000 (0:00:10.031) 0:00:43.770 *********
2026-03-24 04:43:25.475403 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-24 04:43:25.475414 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-24 04:43:25.475422 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-24 04:43:25.475431 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-24 04:43:25.475440 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-24 04:43:25.475448 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-24 04:43:25.475457 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-24 04:43:25.475466 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-24 04:43:25.475495 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-24 04:43:25.475504 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-24 04:43:25.475513 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-24 04:43:25.475522 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-24 04:43:25.475530 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 04:43:25.475539 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 04:43:25.475547 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 04:43:25.475556 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 04:43:25.475577 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 04:43:25.475586 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-24 04:43:25.475595 | orchestrator |
2026-03-24 04:43:25.475604 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-24 04:43:25.475612 | orchestrator | Tuesday 24 March 2026 04:43:18 +0000 (0:00:06.437) 0:00:50.208 *********
2026-03-24 04:43:25.475621 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-24 04:43:25.475631 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:43:25.475639 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-24 04:43:25.475648 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:43:25.475656 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-24 04:43:25.475665 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:43:25.475674 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-03-24 04:43:25.475682 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-03-24 04:43:25.475691 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-03-24 04:43:25.475699 | orchestrator |
2026-03-24 04:43:25.475708 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-24 04:43:25.475717 | orchestrator | Tuesday 24 March 2026 04:43:21 +0000 (0:00:02.286) 0:00:52.495 *********
2026-03-24 04:43:25.475725 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-24 04:43:25.475734 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:43:25.475743 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-24 04:43:25.475751 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:43:25.475760 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-24 04:43:25.475769 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:43:25.475779 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-24 04:43:25.475788 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-24 04:43:25.475798 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-24 04:43:25.475808 | orchestrator |
2026-03-24 04:43:25.475819 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 04:43:25.475830 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-24 04:43:25.475869 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-24 04:43:25.475905 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-24 04:43:25.475934 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 04:43:25.475950 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 04:43:25.475966 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-24 04:43:25.475979 | orchestrator |
2026-03-24 04:43:25.475989 | orchestrator |
2026-03-24 04:43:25.475999 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 04:43:25.476009 | orchestrator | Tuesday 24 March 2026 04:43:25 +0000 (0:00:03.975) 0:00:56.470 *********
2026-03-24 04:43:25.476019 | orchestrator | ===============================================================================
2026-03-24 04:43:25.476028 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.27s
2026-03-24 04:43:25.476038 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.03s
2026-03-24 04:43:25.476047 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.44s
2026-03-24 04:43:25.476055 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.98s
2026-03-24 04:43:25.476064 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.39s
2026-03-24 04:43:25.476072 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.30s
2026-03-24 04:43:25.476080 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.29s
2026-03-24 04:43:25.476089 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.77s
2026-03-24 04:43:25.476097 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.57s
2026-03-24 04:43:25.476106 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.56s
2026-03-24 04:43:25.476114 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.51s
2026-03-24 04:43:25.476123 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.46s
2026-03-24 04:43:25.476131 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.41s
2026-03-24 04:43:25.476140 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.40s
2026-03-24 04:43:25.476148 | orchestrator | module-load : Load modules ---------------------------------------------- 1.22s
2026-03-24 04:43:25.476156 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.17s
2026-03-24 04:43:25.476170 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.07s
2026-03-24 04:43:25.476179 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.96s
2026-03-24 04:43:25.476188 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.94s
2026-03-24 04:43:25.476200 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s
2026-03-24 04:43:25.785527 | orchestrator | + osism apply -a upgrade ovn
2026-03-24 04:43:27.847093 | orchestrator | 2026-03-24 04:43:27 | INFO  | Task 3757a5c3-7893-4231-a225-a65758a95a00 (ovn) was prepared for execution.
2026-03-24 04:43:27.847190 | orchestrator | 2026-03-24 04:43:27 | INFO  | It takes a moment until task 3757a5c3-7893-4231-a225-a65758a95a00 (ovn) has been started and output is visible here.
2026-03-24 04:43:48.787540 | orchestrator | 2026-03-24 04:43:48.787654 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-24 04:43:48.787671 | orchestrator | 2026-03-24 04:43:48.787684 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-24 04:43:48.787695 | orchestrator | Tuesday 24 March 2026 04:43:33 +0000 (0:00:01.631) 0:00:01.631 ********* 2026-03-24 04:43:48.787707 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:43:48.787719 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:43:48.787755 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:43:48.787767 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:43:48.787777 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:43:48.787788 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:43:48.787798 | orchestrator | 2026-03-24 04:43:48.787877 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-24 04:43:48.787898 | orchestrator | Tuesday 24 March 2026 04:43:36 +0000 (0:00:02.444) 0:00:04.076 ********* 2026-03-24 04:43:48.787917 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-24 04:43:48.787936 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-24 04:43:48.787949 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-24 04:43:48.787960 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-24 04:43:48.787971 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-24 04:43:48.787982 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-24 04:43:48.787993 | orchestrator | 2026-03-24 04:43:48.788003 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-24 04:43:48.788014 | orchestrator | 2026-03-24 04:43:48.788025 | orchestrator | TASK [ovn-controller : include_tasks] 
****************************************** 2026-03-24 04:43:48.788036 | orchestrator | Tuesday 24 March 2026 04:43:38 +0000 (0:00:02.405) 0:00:06.482 ********* 2026-03-24 04:43:48.788048 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 04:43:48.788060 | orchestrator | 2026-03-24 04:43:48.788073 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-24 04:43:48.788086 | orchestrator | Tuesday 24 March 2026 04:43:41 +0000 (0:00:03.037) 0:00:09.520 ********* 2026-03-24 04:43:48.788101 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788117 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788129 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788142 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788171 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788213 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788227 | orchestrator | 2026-03-24 04:43:48.788240 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-24 04:43:48.788252 | orchestrator | Tuesday 24 March 2026 04:43:43 +0000 (0:00:02.237) 0:00:11.758 ********* 2026-03-24 04:43:48.788265 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788278 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788291 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788304 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788317 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788328 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788339 | orchestrator | 2026-03-24 04:43:48.788351 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-24 04:43:48.788362 | orchestrator | Tuesday 24 March 2026 04:43:46 +0000 (0:00:02.715) 0:00:14.473 ********* 2026-03-24 04:43:48.788378 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788397 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:48.788418 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.619894 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620030 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620047 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620060 | orchestrator | 2026-03-24 04:43:56.620073 | orchestrator | TASK [ovn-controller : Copying over systemd override] 
************************** 2026-03-24 04:43:56.620086 | orchestrator | Tuesday 24 March 2026 04:43:48 +0000 (0:00:02.257) 0:00:16.731 ********* 2026-03-24 04:43:56.620097 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620109 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620120 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620172 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620185 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620217 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620237 | orchestrator | 2026-03-24 04:43:56.620302 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-03-24 04:43:56.620322 | orchestrator | Tuesday 24 March 2026 04:43:51 +0000 (0:00:03.061) 0:00:19.793 ********* 2026-03-24 04:43:56.620343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:43:56.620444 | orchestrator | 2026-03-24 04:43:56.620456 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-03-24 04:43:56.620470 | orchestrator | Tuesday 24 March 2026 04:43:54 +0000 (0:00:02.585) 0:00:22.378 ********* 2026-03-24 04:43:56.620489 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:43:56.620502 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:43:56.620513 | orchestrator | } 2026-03-24 04:43:56.620524 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:43:56.620534 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:43:56.620545 | orchestrator | } 2026-03-24 04:43:56.620556 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:43:56.620566 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:43:56.620577 | orchestrator | } 2026-03-24 04:43:56.620587 | orchestrator | changed: [testbed-node-3] => { 2026-03-24 04:43:56.620598 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:43:56.620608 | orchestrator | } 2026-03-24 04:43:56.620619 | orchestrator | changed: [testbed-node-4] => { 2026-03-24 04:43:56.620629 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:43:56.620640 | orchestrator | } 2026-03-24 04:43:56.620650 | orchestrator | changed: [testbed-node-5] => { 2026-03-24 04:43:56.620661 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:43:56.620671 | orchestrator | } 2026-03-24 04:43:56.620682 | orchestrator | 2026-03-24 04:43:56.620693 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:43:56.620704 | orchestrator | Tuesday 24 March 2026 04:43:56 +0000 
(0:00:02.062) 0:00:24.440 ********* 2026-03-24 04:43:56.620726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:44:26.597608 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:44:26.597729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:44:26.597815 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:44:26.597838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:44:26.597869 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:44:26.597890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:44:26.597945 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:44:26.597966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:44:26.597978 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:44:26.597988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:44:26.598000 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:44:26.598011 | orchestrator | 2026-03-24 04:44:26.598090 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-24 04:44:26.598103 | orchestrator | Tuesday 24 March 2026 04:43:59 +0000 (0:00:02.517) 0:00:26.958 ********* 2026-03-24 04:44:26.598114 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:44:26.598126 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:44:26.598137 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:44:26.598148 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:44:26.598160 | orchestrator | ok: [testbed-node-4] 
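The "Create br-int bridge on OpenvSwitch" task above reports `ok` (unchanged) on every node because bridge creation is idempotent. As a rough sketch of why that is — the exact module invocation kolla-ansible uses is not visible in this log, so the command form below is an assumption:

```python
# Sketch: build an idempotent Open vSwitch bridge-creation command.
# "--may-exist" makes ovs-vsctl a no-op when the bridge already exists,
# which matches the repeated "ok" results in the task output above.
# The exact invocation used by kolla-ansible is an assumption here.
def br_int_command(bridge: str = "br-int") -> list[str]:
    return ["ovs-vsctl", "--may-exist", "add-br", bridge]

print(" ".join(br_int_command()))
```

Running this merely prints the command; on a host with Open vSwitch installed, the same argument list could be passed to a process runner.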
2026-03-24 04:44:26.598172 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:44:26.598184 | orchestrator | 2026-03-24 04:44:26.598211 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-24 04:44:26.598225 | orchestrator | Tuesday 24 March 2026 04:44:02 +0000 (0:00:03.656) 0:00:30.615 ********* 2026-03-24 04:44:26.598238 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-24 04:44:26.598251 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-24 04:44:26.598269 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-24 04:44:26.598288 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-24 04:44:26.598303 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-24 04:44:26.598316 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-24 04:44:26.598329 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 04:44:26.598341 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 04:44:26.598353 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 04:44:26.598367 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 04:44:26.598378 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 04:44:26.598410 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-24 04:44:26.598423 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-24 04:44:26.598449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-24 04:44:26.598461 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-24 04:44:26.598474 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-24 04:44:26.598486 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-24 04:44:26.598499 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-24 04:44:26.598512 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 04:44:26.598524 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 04:44:26.598537 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 04:44:26.598548 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 04:44:26.598559 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 04:44:26.598569 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-24 04:44:26.598580 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 04:44:26.598590 | orchestrator | ok: [testbed-node-1] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 04:44:26.598601 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 04:44:26.598611 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 04:44:26.598622 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 04:44:26.598632 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-24 04:44:26.598643 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 04:44:26.598654 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 04:44:26.598665 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 04:44:26.598675 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 04:44:26.598686 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 04:44:26.598696 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-24 04:44:26.598707 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-24 04:44:26.598718 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-24 04:44:26.598734 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-24 04:44:26.598770 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-24 04:44:26.598782 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-24 04:44:26.598792 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-24 04:44:26.598804 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-24 04:44:26.598825 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-24 04:44:26.598836 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-24 04:44:26.598847 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-24 04:44:26.598858 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-24 04:44:26.598876 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-24 04:47:14.720872 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-24 04:47:14.721014 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-24 04:47:14.721042 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-24 04:47:14.721062 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-24 04:47:14.721081 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 
'state': 'absent'}) 2026-03-24 04:47:14.721100 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-24 04:47:14.721120 | orchestrator | 2026-03-24 04:47:14.721141 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 04:47:14.721161 | orchestrator | Tuesday 24 March 2026 04:44:23 +0000 (0:00:20.867) 0:00:51.482 ********* 2026-03-24 04:47:14.721180 | orchestrator | 2026-03-24 04:47:14.721201 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 04:47:14.721220 | orchestrator | Tuesday 24 March 2026 04:44:23 +0000 (0:00:00.432) 0:00:51.915 ********* 2026-03-24 04:47:14.721238 | orchestrator | 2026-03-24 04:47:14.721256 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 04:47:14.721275 | orchestrator | Tuesday 24 March 2026 04:44:24 +0000 (0:00:00.418) 0:00:52.334 ********* 2026-03-24 04:47:14.721293 | orchestrator | 2026-03-24 04:47:14.721312 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 04:47:14.721331 | orchestrator | Tuesday 24 March 2026 04:44:24 +0000 (0:00:00.459) 0:00:52.794 ********* 2026-03-24 04:47:14.721349 | orchestrator | 2026-03-24 04:47:14.721368 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 04:47:14.721388 | orchestrator | Tuesday 24 March 2026 04:44:25 +0000 (0:00:00.487) 0:00:53.281 ********* 2026-03-24 04:47:14.721407 | orchestrator | 2026-03-24 04:47:14.721427 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-24 04:47:14.721447 | orchestrator | Tuesday 24 March 2026 04:44:25 +0000 (0:00:00.457) 0:00:53.739 ********* 2026-03-24 04:47:14.721467 | orchestrator | 2026-03-24 04:47:14.721487 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-24 04:47:14.721536 | orchestrator | Tuesday 24 March 2026 04:44:26 +0000 (0:00:00.771) 0:00:54.511 ********* 2026-03-24 04:47:14.721556 | orchestrator | 2026-03-24 04:47:14.721575 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-03-24 04:47:14.721595 | orchestrator | changed: [testbed-node-3] 2026-03-24 04:47:14.721617 | orchestrator | changed: [testbed-node-5] 2026-03-24 04:47:14.721637 | orchestrator | changed: [testbed-node-4] 2026-03-24 04:47:14.721657 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:47:14.721677 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:47:14.721696 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:47:14.721749 | orchestrator | 2026-03-24 04:47:14.721769 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-24 04:47:14.721788 | orchestrator | 2026-03-24 04:47:14.721806 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-24 04:47:14.721824 | orchestrator | Tuesday 24 March 2026 04:46:38 +0000 (0:02:11.869) 0:03:06.380 ********* 2026-03-24 04:47:14.721843 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:47:14.721861 | orchestrator | 2026-03-24 04:47:14.721881 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-24 04:47:14.721899 | orchestrator | Tuesday 24 March 2026 04:46:40 +0000 (0:00:01.899) 0:03:08.279 ********* 2026-03-24 04:47:14.721919 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-24 04:47:14.721937 | orchestrator | 2026-03-24 04:47:14.721974 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] 
************* 2026-03-24 04:47:14.721993 | orchestrator | Tuesday 24 March 2026 04:46:42 +0000 (0:00:01.898) 0:03:10.177 ********* 2026-03-24 04:47:14.722011 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.722110 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.722128 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.722191 | orchestrator | 2026-03-24 04:47:14.722209 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-24 04:47:14.722227 | orchestrator | Tuesday 24 March 2026 04:46:44 +0000 (0:00:01.974) 0:03:12.152 ********* 2026-03-24 04:47:14.722244 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.722262 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.722282 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.722300 | orchestrator | 2026-03-24 04:47:14.722317 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-24 04:47:14.722335 | orchestrator | Tuesday 24 March 2026 04:46:45 +0000 (0:00:01.537) 0:03:13.689 ********* 2026-03-24 04:47:14.722353 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.722370 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.722389 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.722406 | orchestrator | 2026-03-24 04:47:14.722424 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-24 04:47:14.722441 | orchestrator | Tuesday 24 March 2026 04:46:47 +0000 (0:00:01.411) 0:03:15.101 ********* 2026-03-24 04:47:14.722459 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.722477 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.722494 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.722542 | orchestrator | 2026-03-24 04:47:14.722560 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-24 04:47:14.722579 | orchestrator | Tuesday 24 March 
2026 04:46:48 +0000 (0:00:01.631) 0:03:16.732 ********* 2026-03-24 04:47:14.722598 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.722643 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.722661 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.722680 | orchestrator | 2026-03-24 04:47:14.722700 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-24 04:47:14.722718 | orchestrator | Tuesday 24 March 2026 04:46:50 +0000 (0:00:01.329) 0:03:18.062 ********* 2026-03-24 04:47:14.722736 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:47:14.722754 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:47:14.722772 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:47:14.722789 | orchestrator | 2026-03-24 04:47:14.722809 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-24 04:47:14.722828 | orchestrator | Tuesday 24 March 2026 04:46:51 +0000 (0:00:01.398) 0:03:19.460 ********* 2026-03-24 04:47:14.722846 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.722865 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.722884 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.722902 | orchestrator | 2026-03-24 04:47:14.722920 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-24 04:47:14.722958 | orchestrator | Tuesday 24 March 2026 04:46:53 +0000 (0:00:01.751) 0:03:21.212 ********* 2026-03-24 04:47:14.722977 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.722996 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.723015 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.723034 | orchestrator | 2026-03-24 04:47:14.723052 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-24 04:47:14.723071 | orchestrator | Tuesday 24 March 2026 04:46:54 +0000 (0:00:01.553) 0:03:22.765 ********* 
2026-03-24 04:47:14.723090 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.723108 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.723126 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.723145 | orchestrator | 2026-03-24 04:47:14.723163 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-24 04:47:14.723182 | orchestrator | Tuesday 24 March 2026 04:46:56 +0000 (0:00:01.940) 0:03:24.706 ********* 2026-03-24 04:47:14.723200 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.723219 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.723237 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.723255 | orchestrator | 2026-03-24 04:47:14.723274 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-24 04:47:14.723293 | orchestrator | Tuesday 24 March 2026 04:46:58 +0000 (0:00:01.396) 0:03:26.103 ********* 2026-03-24 04:47:14.723312 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:47:14.723331 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:47:14.723349 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:47:14.723368 | orchestrator | 2026-03-24 04:47:14.723386 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-24 04:47:14.723405 | orchestrator | Tuesday 24 March 2026 04:46:59 +0000 (0:00:01.426) 0:03:27.530 ********* 2026-03-24 04:47:14.723423 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:47:14.723442 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:47:14.723461 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:47:14.723479 | orchestrator | 2026-03-24 04:47:14.723577 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-24 04:47:14.723599 | orchestrator | Tuesday 24 March 2026 04:47:00 +0000 (0:00:01.410) 0:03:28.940 ********* 2026-03-24 04:47:14.723618 | 
orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.723636 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.723656 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.723674 | orchestrator | 2026-03-24 04:47:14.723691 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-24 04:47:14.723708 | orchestrator | Tuesday 24 March 2026 04:47:02 +0000 (0:00:01.774) 0:03:30.715 ********* 2026-03-24 04:47:14.723725 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.723741 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.723757 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.723774 | orchestrator | 2026-03-24 04:47:14.723791 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-24 04:47:14.723808 | orchestrator | Tuesday 24 March 2026 04:47:04 +0000 (0:00:01.363) 0:03:32.078 ********* 2026-03-24 04:47:14.723825 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.723841 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.723858 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.723874 | orchestrator | 2026-03-24 04:47:14.723890 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-24 04:47:14.723917 | orchestrator | Tuesday 24 March 2026 04:47:06 +0000 (0:00:02.040) 0:03:34.119 ********* 2026-03-24 04:47:14.723934 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:47:14.723950 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:47:14.723967 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:47:14.723983 | orchestrator | 2026-03-24 04:47:14.724000 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-24 04:47:14.724016 | orchestrator | Tuesday 24 March 2026 04:47:07 +0000 (0:00:01.383) 0:03:35.503 ********* 2026-03-24 04:47:14.724043 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:47:14.724059 | 
orchestrator | skipping: [testbed-node-1] 2026-03-24 04:47:14.724076 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:47:14.724093 | orchestrator | 2026-03-24 04:47:14.724109 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-24 04:47:14.724126 | orchestrator | Tuesday 24 March 2026 04:47:08 +0000 (0:00:01.401) 0:03:36.904 ********* 2026-03-24 04:47:14.724143 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:47:14.724159 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:47:14.724176 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:47:14.724192 | orchestrator | 2026-03-24 04:47:14.724209 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-24 04:47:14.724225 | orchestrator | Tuesday 24 March 2026 04:47:10 +0000 (0:00:01.669) 0:03:38.574 ********* 2026-03-24 04:47:14.724258 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777157 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777287 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777328 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777361 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777400 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:20.777453 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:20.777485 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:20.777585 | orchestrator | 2026-03-24 04:47:20.777602 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-24 04:47:20.777618 | orchestrator | Tuesday 24 March 2026 04:47:14 +0000 (0:00:04.092) 0:03:42.666 ********* 2026-03-24 
04:47:20.777634 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777670 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:20.777731 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962460 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962628 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:34.962664 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962681 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:34.962693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:34.962698 | orchestrator | 2026-03-24 04:47:34.962704 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-03-24 04:47:34.962710 | orchestrator | Tuesday 24 March 2026 04:47:20 +0000 (0:00:06.060) 0:03:48.726 ********* 2026-03-24 04:47:34.962715 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-03-24 04:47:34.962720 | orchestrator | 2026-03-24 04:47:34.962725 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-03-24 04:47:34.962730 | orchestrator | Tuesday 24 March 2026 04:47:22 +0000 (0:00:01.723) 0:03:50.449 ********* 2026-03-24 04:47:34.962735 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:47:34.962741 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:47:34.962756 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:47:34.962761 | orchestrator | 2026-03-24 04:47:34.962765 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-03-24 04:47:34.962770 | orchestrator | Tuesday 24 March 2026 04:47:24 +0000 
(0:00:01.666) 0:03:52.116 ********* 2026-03-24 04:47:34.962775 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:47:34.962780 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:47:34.962784 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:47:34.962789 | orchestrator | 2026-03-24 04:47:34.962793 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-03-24 04:47:34.962798 | orchestrator | Tuesday 24 March 2026 04:47:26 +0000 (0:00:02.585) 0:03:54.702 ********* 2026-03-24 04:47:34.962802 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:47:34.962807 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:47:34.962812 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:47:34.962816 | orchestrator | 2026-03-24 04:47:34.962821 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-03-24 04:47:34.962830 | orchestrator | Tuesday 24 March 2026 04:47:29 +0000 (0:00:02.821) 0:03:57.524 ********* 2026-03-24 04:47:34.962836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:34.962874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:39.518251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:39.518329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:47:39.518346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518356 | orchestrator | 2026-03-24 04:47:39.518361 | orchestrator | TASK [service-check-containers : 
ovn_db | Notify handlers to restart containers] *** 2026-03-24 04:47:39.518366 | orchestrator | Tuesday 24 March 2026 04:47:34 +0000 (0:00:05.367) 0:04:02.891 ********* 2026-03-24 04:47:39.518371 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:47:39.518376 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:47:39.518380 | orchestrator | } 2026-03-24 04:47:39.518384 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:47:39.518388 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:47:39.518392 | orchestrator | } 2026-03-24 04:47:39.518395 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:47:39.518399 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:47:39.518403 | orchestrator | } 2026-03-24 04:47:39.518407 | orchestrator | 2026-03-24 04:47:39.518411 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-24 04:47:39.518414 | orchestrator | Tuesday 24 March 2026 04:47:36 +0000 (0:00:01.404) 0:04:04.296 ********* 2026-03-24 04:47:39.518419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-24 04:47:39.518499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-24 04:47:39.518524 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-24 04:49:09.991538 | orchestrator | 2026-03-24 04:49:09.991659 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-24 04:49:09.991685 | orchestrator | Tuesday 24 March 2026 04:47:39 +0000 (0:00:03.162) 0:04:07.459 ********* 2026-03-24 04:49:09.991705 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-03-24 04:49:09.991722 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-03-24 04:49:09.991742 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-03-24 04:49:09.991763 | orchestrator | 2026-03-24 04:49:09.991816 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-24 04:49:09.991836 | orchestrator | Tuesday 24 March 2026 04:47:41 +0000 (0:00:02.175) 0:04:09.634 ********* 2026-03-24 04:49:09.991854 | orchestrator | changed: [testbed-node-0] => { 2026-03-24 04:49:09.991873 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:49:09.991891 | orchestrator | } 
2026-03-24 04:49:09.991910 | orchestrator | changed: [testbed-node-1] => { 2026-03-24 04:49:09.991928 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:49:09.991946 | orchestrator | } 2026-03-24 04:49:09.991964 | orchestrator | changed: [testbed-node-2] => { 2026-03-24 04:49:09.991982 | orchestrator |  "msg": "Notifying handlers" 2026-03-24 04:49:09.992000 | orchestrator | } 2026-03-24 04:49:09.992018 | orchestrator | 2026-03-24 04:49:09.992040 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-24 04:49:09.992062 | orchestrator | Tuesday 24 March 2026 04:47:43 +0000 (0:00:01.395) 0:04:11.030 ********* 2026-03-24 04:49:09.992084 | orchestrator | 2026-03-24 04:49:09.992104 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-24 04:49:09.992126 | orchestrator | Tuesday 24 March 2026 04:47:43 +0000 (0:00:00.439) 0:04:11.469 ********* 2026-03-24 04:49:09.992147 | orchestrator | 2026-03-24 04:49:09.992167 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-24 04:49:09.992208 | orchestrator | Tuesday 24 March 2026 04:47:43 +0000 (0:00:00.464) 0:04:11.933 ********* 2026-03-24 04:49:09.992228 | orchestrator | 2026-03-24 04:49:09.992248 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-24 04:49:09.992270 | orchestrator | Tuesday 24 March 2026 04:47:45 +0000 (0:00:01.039) 0:04:12.973 ********* 2026-03-24 04:49:09.992292 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:49:09.992313 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:49:09.992331 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:49:09.992350 | orchestrator | 2026-03-24 04:49:09.992397 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-24 04:49:09.992417 | orchestrator | Tuesday 24 March 2026 04:48:01 +0000 
(0:00:16.170) 0:04:29.143 ********* 2026-03-24 04:49:09.992471 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:49:09.992490 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:49:09.992508 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:49:09.992526 | orchestrator | 2026-03-24 04:49:09.992541 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-03-24 04:49:09.992558 | orchestrator | Tuesday 24 March 2026 04:48:17 +0000 (0:00:15.992) 0:04:45.136 ********* 2026-03-24 04:49:09.992575 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-03-24 04:49:09.992592 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-03-24 04:49:09.992609 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-03-24 04:49:09.992626 | orchestrator | 2026-03-24 04:49:09.992643 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-24 04:49:09.992660 | orchestrator | Tuesday 24 March 2026 04:48:33 +0000 (0:00:16.048) 0:05:01.184 ********* 2026-03-24 04:49:09.992677 | orchestrator | changed: [testbed-node-1] 2026-03-24 04:49:09.992693 | orchestrator | changed: [testbed-node-2] 2026-03-24 04:49:09.992710 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:49:09.992727 | orchestrator | 2026-03-24 04:49:09.992745 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-24 04:49:09.992763 | orchestrator | Tuesday 24 March 2026 04:48:49 +0000 (0:00:16.491) 0:05:17.675 ********* 2026-03-24 04:49:09.992780 | orchestrator | Pausing for 5 seconds 2026-03-24 04:49:09.992798 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:49:09.992814 | orchestrator | 2026-03-24 04:49:09.992831 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-24 04:49:09.992848 | orchestrator | Tuesday 24 March 2026 04:48:55 +0000 (0:00:06.242) 0:05:23.917 ********* 2026-03-24 
04:49:09.992865 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:49:09.992882 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:49:09.992899 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:49:09.992916 | orchestrator | 2026-03-24 04:49:09.992933 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-24 04:49:09.992953 | orchestrator | Tuesday 24 March 2026 04:48:57 +0000 (0:00:01.878) 0:05:25.796 ********* 2026-03-24 04:49:09.992970 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:49:09.992987 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:49:09.993004 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:49:09.993020 | orchestrator | 2026-03-24 04:49:09.993038 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-24 04:49:09.993056 | orchestrator | Tuesday 24 March 2026 04:48:59 +0000 (0:00:01.624) 0:05:27.421 ********* 2026-03-24 04:49:09.993074 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:49:09.993092 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:49:09.993111 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:49:09.993130 | orchestrator | 2026-03-24 04:49:09.993147 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-24 04:49:09.993165 | orchestrator | Tuesday 24 March 2026 04:49:01 +0000 (0:00:01.787) 0:05:29.209 ********* 2026-03-24 04:49:09.993184 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:49:09.993202 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:49:09.993219 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:49:09.993230 | orchestrator | 2026-03-24 04:49:09.993241 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-24 04:49:09.993252 | orchestrator | Tuesday 24 March 2026 04:49:02 +0000 (0:00:01.672) 0:05:30.882 ********* 2026-03-24 04:49:09.993262 | orchestrator | ok: 
[testbed-node-0] 2026-03-24 04:49:09.993273 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:49:09.993283 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:49:09.993294 | orchestrator | 2026-03-24 04:49:09.993305 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-24 04:49:09.993340 | orchestrator | Tuesday 24 March 2026 04:49:04 +0000 (0:00:01.840) 0:05:32.722 ********* 2026-03-24 04:49:09.993352 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:49:09.993363 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:49:09.993451 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:49:09.993463 | orchestrator | 2026-03-24 04:49:09.993474 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-03-24 04:49:09.993484 | orchestrator | Tuesday 24 March 2026 04:49:06 +0000 (0:00:01.923) 0:05:34.646 ********* 2026-03-24 04:49:09.993495 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-03-24 04:49:09.993506 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-03-24 04:49:09.993517 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-03-24 04:49:09.993527 | orchestrator | 2026-03-24 04:49:09.993538 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-24 04:49:09.993550 | orchestrator | testbed-node-0 : ok=50  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-24 04:49:09.993563 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-24 04:49:09.993574 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-24 04:49:09.993585 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 04:49:09.993606 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 
04:49:09.993617 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-24 04:49:09.993628 | orchestrator | 2026-03-24 04:49:09.993639 | orchestrator | 2026-03-24 04:49:09.993650 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 04:49:09.993660 | orchestrator | Tuesday 24 March 2026 04:49:09 +0000 (0:00:02.908) 0:05:37.554 ********* 2026-03-24 04:49:09.993671 | orchestrator | =============================================================================== 2026-03-24 04:49:09.993682 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.87s 2026-03-24 04:49:09.993693 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.87s 2026-03-24 04:49:09.993703 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.49s 2026-03-24 04:49:09.993714 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.17s 2026-03-24 04:49:09.993724 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 16.05s 2026-03-24 04:49:09.993735 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.99s 2026-03-24 04:49:09.993746 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.24s 2026-03-24 04:49:09.993756 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.06s 2026-03-24 04:49:09.993767 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.37s 2026-03-24 04:49:09.993778 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.09s 2026-03-24 04:49:09.993788 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.66s 2026-03-24 04:49:09.993799 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 3.16s 2026-03-24 04:49:09.993809 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.06s 2026-03-24 04:49:09.993820 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.04s 2026-03-24 04:49:09.993831 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.03s 2026-03-24 04:49:09.993841 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.91s 2026-03-24 04:49:09.993852 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.82s 2026-03-24 04:49:09.993873 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.72s 2026-03-24 04:49:09.993883 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.59s 2026-03-24 04:49:09.993895 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.58s 2026-03-24 04:49:10.355847 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-24 04:49:10.355972 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-24 04:49:10.356000 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-03-24 04:49:10.362128 | orchestrator | + set -e 2026-03-24 04:49:10.362229 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-24 04:49:10.362249 | orchestrator | ++ export INTERACTIVE=false 2026-03-24 04:49:10.362264 | orchestrator | ++ INTERACTIVE=false 2026-03-24 04:49:10.362275 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-24 04:49:10.362288 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-24 04:49:10.362308 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-03-24 04:49:12.409781 | orchestrator | 2026-03-24 04:49:12 | INFO  | Task 46d03020-845b-423e-9b08-586dbc5ade18 
(ceph-rolling_update) was prepared for execution. 2026-03-24 04:49:12.409901 | orchestrator | 2026-03-24 04:49:12 | INFO  | It takes a moment until task 46d03020-845b-423e-9b08-586dbc5ade18 (ceph-rolling_update) has been started and output is visible here. 2026-03-24 04:50:33.991174 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-24 04:50:33.991389 | orchestrator | 2.16.14 2026-03-24 04:50:33.991425 | orchestrator | 2026-03-24 04:50:33.991446 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-03-24 04:50:33.991467 | orchestrator | 2026-03-24 04:50:33.991487 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-03-24 04:50:33.991506 | orchestrator | Tuesday 24 March 2026 04:49:20 +0000 (0:00:01.619) 0:00:01.619 ********* 2026-03-24 04:50:33.991525 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-03-24 04:50:33.991538 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-03-24 04:50:33.991549 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-03-24 04:50:33.991560 | orchestrator | skipping: [localhost] 2026-03-24 04:50:33.991572 | orchestrator | 2026-03-24 04:50:33.991583 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-03-24 04:50:33.991594 | orchestrator | 2026-03-24 04:50:33.991605 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-03-24 04:50:33.991616 | orchestrator | Tuesday 24 March 2026 04:49:22 +0000 (0:00:02.046) 0:00:03.666 ********* 2026-03-24 04:50:33.991627 | orchestrator | ok: [testbed-node-0] => { 2026-03-24 04:50:33.991638 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-24 04:50:33.991649 | orchestrator | } 2026-03-24 04:50:33.991660 | 
orchestrator | ok: [testbed-node-1] => { 2026-03-24 04:50:33.991671 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-24 04:50:33.991682 | orchestrator | } 2026-03-24 04:50:33.991693 | orchestrator | ok: [testbed-node-2] => { 2026-03-24 04:50:33.991704 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-24 04:50:33.991716 | orchestrator | } 2026-03-24 04:50:33.991728 | orchestrator | ok: [testbed-node-3] => { 2026-03-24 04:50:33.991756 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-24 04:50:33.991769 | orchestrator | } 2026-03-24 04:50:33.991782 | orchestrator | ok: [testbed-node-4] => { 2026-03-24 04:50:33.991794 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-24 04:50:33.991806 | orchestrator | } 2026-03-24 04:50:33.991818 | orchestrator | ok: [testbed-node-5] => { 2026-03-24 04:50:33.991831 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-24 04:50:33.991843 | orchestrator | } 2026-03-24 04:50:33.991855 | orchestrator | ok: [testbed-manager] => { 2026-03-24 04:50:33.991868 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-03-24 04:50:33.991903 | orchestrator | } 2026-03-24 04:50:33.991915 | orchestrator | 2026-03-24 04:50:33.991928 | orchestrator | TASK [Gather facts] ************************************************************ 2026-03-24 04:50:33.991941 | orchestrator | Tuesday 24 March 2026 04:49:27 +0000 (0:00:04.834) 0:00:08.500 ********* 2026-03-24 04:50:33.991953 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:50:33.991966 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:50:33.991978 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:50:33.991991 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:50:33.992004 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:50:33.992016 | 
orchestrator | skipping: [testbed-node-5] 2026-03-24 04:50:33.992029 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.992041 | orchestrator | 2026-03-24 04:50:33.992054 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-03-24 04:50:33.992067 | orchestrator | Tuesday 24 March 2026 04:49:32 +0000 (0:00:04.919) 0:00:13.419 ********* 2026-03-24 04:50:33.992079 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:50:33.992092 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 04:50:33.992105 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 04:50:33.992117 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 04:50:33.992127 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 04:50:33.992138 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 04:50:33.992148 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 04:50:33.992159 | orchestrator | 2026-03-24 04:50:33.992170 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-03-24 04:50:33.992180 | orchestrator | Tuesday 24 March 2026 04:50:03 +0000 (0:00:30.992) 0:00:44.412 ********* 2026-03-24 04:50:33.992191 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:33.992202 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:33.992213 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:33.992223 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:33.992234 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:33.992244 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:33.992255 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.992266 | 
orchestrator | 2026-03-24 04:50:33.992277 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 04:50:33.992309 | orchestrator | Tuesday 24 March 2026 04:50:05 +0000 (0:00:02.093) 0:00:46.506 ********* 2026-03-24 04:50:33.992321 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-24 04:50:33.992333 | orchestrator | 2026-03-24 04:50:33.992344 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 04:50:33.992355 | orchestrator | Tuesday 24 March 2026 04:50:08 +0000 (0:00:02.647) 0:00:49.153 ********* 2026-03-24 04:50:33.992366 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:33.992377 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:33.992387 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:33.992399 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:33.992409 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:33.992420 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:33.992436 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.992453 | orchestrator | 2026-03-24 04:50:33.992496 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 04:50:33.992517 | orchestrator | Tuesday 24 March 2026 04:50:10 +0000 (0:00:02.669) 0:00:51.823 ********* 2026-03-24 04:50:33.992535 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:33.992546 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:33.992557 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:33.992579 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:33.992590 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:33.992600 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:33.992611 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.992621 | orchestrator | 
2026-03-24 04:50:33.992632 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 04:50:33.992643 | orchestrator | Tuesday 24 March 2026 04:50:12 +0000 (0:00:01.954) 0:00:53.778 ********* 2026-03-24 04:50:33.992654 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:33.992664 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:33.992675 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:33.992685 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:33.992696 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:33.992706 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:33.992717 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.992727 | orchestrator | 2026-03-24 04:50:33.992738 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 04:50:33.992748 | orchestrator | Tuesday 24 March 2026 04:50:15 +0000 (0:00:02.518) 0:00:56.297 ********* 2026-03-24 04:50:33.992759 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:33.992770 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:33.992780 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:33.992791 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:33.992801 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:33.992812 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:33.992822 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.992833 | orchestrator | 2026-03-24 04:50:33.992844 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 04:50:33.992854 | orchestrator | Tuesday 24 March 2026 04:50:17 +0000 (0:00:01.929) 0:00:58.226 ********* 2026-03-24 04:50:33.992865 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:33.992883 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:33.992894 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:33.992904 | orchestrator | ok: [testbed-node-3] 2026-03-24 
04:50:33.992915 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:33.992925 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:33.992936 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.992947 | orchestrator | 2026-03-24 04:50:33.992957 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 04:50:33.992971 | orchestrator | Tuesday 24 March 2026 04:50:19 +0000 (0:00:02.207) 0:01:00.434 ********* 2026-03-24 04:50:33.992990 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:33.993009 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:33.993022 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:33.993032 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:33.993043 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:33.993059 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:33.993077 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.993104 | orchestrator | 2026-03-24 04:50:33.993122 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 04:50:33.993139 | orchestrator | Tuesday 24 March 2026 04:50:21 +0000 (0:00:02.003) 0:01:02.437 ********* 2026-03-24 04:50:33.993156 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:50:33.993173 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:50:33.993190 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:50:33.993207 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:50:33.993225 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:50:33.993243 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:50:33.993261 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:50:33.993279 | orchestrator | 2026-03-24 04:50:33.993338 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 04:50:33.993350 | orchestrator | Tuesday 24 March 2026 04:50:23 +0000 (0:00:02.091) 0:01:04.529 
********* 2026-03-24 04:50:33.993361 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:33.993372 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:33.993393 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:33.993404 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:33.993414 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:33.993425 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:33.993436 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.993446 | orchestrator | 2026-03-24 04:50:33.993457 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 04:50:33.993468 | orchestrator | Tuesday 24 March 2026 04:50:25 +0000 (0:00:01.895) 0:01:06.425 ********* 2026-03-24 04:50:33.993479 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:50:33.993490 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 04:50:33.993500 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 04:50:33.993511 | orchestrator | 2026-03-24 04:50:33.993522 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 04:50:33.993532 | orchestrator | Tuesday 24 March 2026 04:50:27 +0000 (0:00:01.632) 0:01:08.057 ********* 2026-03-24 04:50:33.993543 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:33.993554 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:33.993565 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:33.993575 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:33.993586 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:33.993596 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:33.993607 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:33.993618 | orchestrator | 2026-03-24 04:50:33.993628 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 
2026-03-24 04:50:33.993639 | orchestrator | Tuesday 24 March 2026 04:50:29 +0000 (0:00:02.075) 0:01:10.133 ********* 2026-03-24 04:50:33.993650 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:50:33.993661 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 04:50:33.993672 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 04:50:33.993683 | orchestrator | 2026-03-24 04:50:33.993694 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 04:50:33.993705 | orchestrator | Tuesday 24 March 2026 04:50:32 +0000 (0:00:03.350) 0:01:13.483 ********* 2026-03-24 04:50:33.993727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 04:50:55.775684 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 04:50:55.775804 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 04:50:55.775820 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:50:55.775833 | orchestrator | 2026-03-24 04:50:55.775846 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 04:50:55.775858 | orchestrator | Tuesday 24 March 2026 04:50:33 +0000 (0:00:01.396) 0:01:14.879 ********* 2026-03-24 04:50:55.775872 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 04:50:55.775886 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 04:50:55.775897 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 04:50:55.775908 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:50:55.775919 | orchestrator | 2026-03-24 04:50:55.775931 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 04:50:55.775942 | orchestrator | Tuesday 24 March 2026 04:50:35 +0000 (0:00:01.851) 0:01:16.731 ********* 2026-03-24 04:50:55.775980 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 04:50:55.775995 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 04:50:55.776065 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 04:50:55.776080 | orchestrator | skipping: 
[testbed-node-0] 2026-03-24 04:50:55.776091 | orchestrator | 2026-03-24 04:50:55.776102 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 04:50:55.776113 | orchestrator | Tuesday 24 March 2026 04:50:36 +0000 (0:00:01.144) 0:01:17.875 ********* 2026-03-24 04:50:55.776126 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cefde431640e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 04:50:29.949753', 'end': '2026-03-24 04:50:29.993703', 'delta': '0:00:00.043950', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cefde431640e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 04:50:55.776161 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4f8b0ade79f3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 04:50:30.835773', 'end': '2026-03-24 04:50:30.873628', 'delta': '0:00:00.037855', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f8b0ade79f3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 04:50:55.776174 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 
'cce21668b5d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 04:50:31.395917', 'end': '2026-03-24 04:50:31.448172', 'delta': '0:00:00.052255', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cce21668b5d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 04:50:55.776186 | orchestrator | 2026-03-24 04:50:55.776197 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 04:50:55.776217 | orchestrator | Tuesday 24 March 2026 04:50:38 +0000 (0:00:01.174) 0:01:19.049 ********* 2026-03-24 04:50:55.776230 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:55.776244 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:55.776257 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:55.776295 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:55.776308 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:55.776320 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:55.776339 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:55.776352 | orchestrator | 2026-03-24 04:50:55.776364 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 04:50:55.776376 | orchestrator | Tuesday 24 March 2026 04:50:40 +0000 (0:00:02.108) 0:01:21.158 ********* 2026-03-24 04:50:55.776390 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:50:55.776403 | orchestrator | 2026-03-24 04:50:55.776415 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 04:50:55.776428 | orchestrator | 
Tuesday 24 March 2026 04:50:41 +0000 (0:00:01.250) 0:01:22.408 ********* 2026-03-24 04:50:55.776440 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:55.776452 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:55.776464 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:55.776476 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:50:55.776488 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:55.776501 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:55.776513 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:55.776525 | orchestrator | 2026-03-24 04:50:55.776537 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 04:50:55.776549 | orchestrator | Tuesday 24 March 2026 04:50:43 +0000 (0:00:02.063) 0:01:24.472 ********* 2026-03-24 04:50:55.776561 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:55.776574 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-24 04:50:55.776587 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-24 04:50:55.776597 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 04:50:55.776608 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-24 04:50:55.776629 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-24 04:50:55.776649 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-24 04:50:55.776667 | orchestrator | 2026-03-24 04:50:55.776686 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 04:50:55.776705 | orchestrator | Tuesday 24 March 2026 04:50:47 +0000 (0:00:03.520) 0:01:27.993 ********* 2026-03-24 04:50:55.776722 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:50:55.776741 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:50:55.776757 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:50:55.776775 | orchestrator | ok: 
[testbed-node-3] 2026-03-24 04:50:55.776793 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:50:55.776812 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:50:55.776831 | orchestrator | ok: [testbed-manager] 2026-03-24 04:50:55.776848 | orchestrator | 2026-03-24 04:50:55.776866 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 04:50:55.776883 | orchestrator | Tuesday 24 March 2026 04:50:49 +0000 (0:00:02.184) 0:01:30.177 ********* 2026-03-24 04:50:55.776901 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:50:55.776919 | orchestrator | 2026-03-24 04:50:55.776939 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 04:50:55.776958 | orchestrator | Tuesday 24 March 2026 04:50:50 +0000 (0:00:01.123) 0:01:31.301 ********* 2026-03-24 04:50:55.776978 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:50:55.776997 | orchestrator | 2026-03-24 04:50:55.777017 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 04:50:55.777036 | orchestrator | Tuesday 24 March 2026 04:50:51 +0000 (0:00:01.205) 0:01:32.506 ********* 2026-03-24 04:50:55.777054 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:50:55.777077 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:50:55.777088 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:50:55.777099 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:50:55.777111 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:50:55.777121 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:50:55.777132 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:50:55.777143 | orchestrator | 2026-03-24 04:50:55.777159 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 04:50:55.777178 | orchestrator | Tuesday 24 March 2026 04:50:53 +0000 (0:00:02.223) 0:01:34.730 ********* 
2026-03-24 04:50:55.777196 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:50:55.777214 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:50:55.777233 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:50:55.777253 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:50:55.777335 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:50:55.777348 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:50:55.777372 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:51:06.026318 | orchestrator | 2026-03-24 04:51:06.026422 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 04:51:06.026434 | orchestrator | Tuesday 24 March 2026 04:50:55 +0000 (0:00:01.933) 0:01:36.663 ********* 2026-03-24 04:51:06.026441 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:51:06.026449 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:51:06.026455 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:51:06.026518 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:51:06.026534 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:51:06.026541 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:51:06.026548 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:51:06.026554 | orchestrator | 2026-03-24 04:51:06.026561 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 04:51:06.026568 | orchestrator | Tuesday 24 March 2026 04:50:57 +0000 (0:00:02.031) 0:01:38.695 ********* 2026-03-24 04:51:06.026575 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:51:06.026582 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:51:06.026588 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:51:06.026594 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:51:06.026601 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:51:06.026607 | orchestrator | skipping: [testbed-node-5] 
2026-03-24 04:51:06.026613 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:51:06.026620 | orchestrator |
2026-03-24 04:51:06.026626 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-24 04:51:06.026632 | orchestrator | Tuesday 24 March 2026 04:50:59 +0000 (0:00:01.864) 0:01:40.560 *********
2026-03-24 04:51:06.026639 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:51:06.026645 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:51:06.026652 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:51:06.026658 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:06.026664 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:51:06.026670 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:51:06.026690 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:51:06.026701 | orchestrator |
2026-03-24 04:51:06.026712 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-24 04:51:06.026723 | orchestrator | Tuesday 24 March 2026 04:51:01 +0000 (0:00:02.105) 0:01:42.666 *********
2026-03-24 04:51:06.026733 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:51:06.026743 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:51:06.026754 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:51:06.026764 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:06.026774 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:51:06.026785 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:51:06.026795 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:51:06.026806 | orchestrator |
2026-03-24 04:51:06.026817 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-24 04:51:06.026843 | orchestrator | Tuesday 24 March 2026 04:51:03 +0000 (0:00:01.900) 0:01:44.567 *********
2026-03-24 04:51:06.026851 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:51:06.026858 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:51:06.026865 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:51:06.026873 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:51:06.026880 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:51:06.026888 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:51:06.026895 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:51:06.026902 | orchestrator | 2026-03-24 04:51:06.026909 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 04:51:06.026916 | orchestrator | Tuesday 24 March 2026 04:51:05 +0000 (0:00:02.063) 0:01:46.630 ********* 2026-03-24 04:51:06.026926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.026937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.026944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.026968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 04:51:06.026978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.026986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.026998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.027015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2db98c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 
'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 04:51:06.027025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.027040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.194782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.194884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.194971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.194988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 04:51:06.195003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.195015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.195026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.195069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6bbbff7c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 
'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 04:51:06.195093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.195105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.195117 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:51:06.195130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.195142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.195153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.195164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 04:51:06.195185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.474791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.474906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.474922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4fc154b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 04:51:06.474932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.474939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.474946 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:51:06.474968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.474986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'uuids': ['53f92492-3feb-4aff-ba7b-51c07dc9f447'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc']}})  2026-03-24 04:51:06.474994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f47182f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 04:51:06.475002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b']}})  2026-03-24 04:51:06.475009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.475016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.475023 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:51:06.475030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 04:51:06.475043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.524777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR', 'dm-uuid-CRYPT-LUKS2-0e39c5b023134ee09db3234d14233a9c-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 04:51:06.524877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.524887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'uuids': ['0e39c5b0-2313-4ee0-9db3-234d14233a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR']}})  2026-03-24 04:51:06.524895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80']}})  2026-03-24 04:51:06.524902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.524933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85facbe5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 
'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 04:51:06.524958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.524965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.524971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc', 'dm-uuid-CRYPT-LUKS2-53f924923feb4affba7b51c07dc9f447-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 04:51:06.524977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-03-24 04:51:06.524983 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'uuids': ['b8232bef-dd2a-4f87-af94-920947facf6d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7']}})  2026-03-24 04:51:06.524999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a2e3e3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 04:51:06.704923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0']}})  2026-03-24 04:51:06.705027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.705046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.705061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-39-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 04:51:06.705074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.705085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3', 'dm-uuid-CRYPT-LUKS2-fea79c97fade4123ac0e1fedfdaf5b5c-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 04:51:06.705119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.705132 | orchestrator | skipping: [testbed-node-3] 2026-03-24 
04:51:06.705164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'uuids': ['fea79c97-fade-4123-ac0e-1fedfdaf5b5c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3']}})  2026-03-24 04:51:06.705185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537']}})  2026-03-24 04:51:06.705197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.705213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '063919ee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 04:51:06.705241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.910318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.910414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7', 'dm-uuid-CRYPT-LUKS2-b8232befdd2a4f87af94920947facf6d-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 04:51:06.910427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.910436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'uuids': ['37d3be03-52e4-42ec-a3b4-48d6e6f02ec4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4']}})  2026-03-24 04:51:06.910443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b1c01c59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 04:51:06.910467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f']}})  2026-03-24 04:51:06.910475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.910494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.910505 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 04:51:06.910512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.910518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA', 'dm-uuid-CRYPT-LUKS2-f7a38ad6fb8a47e49b12a27889e2fccd-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 04:51:06.910525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.910531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'uuids': ['f7a38ad6-fb8a-47e4-9b12-a27889e2fccd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA']}})  2026-03-24 04:51:06.910543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59']}})  2026-03-24 04:51:06.910550 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:51:06.910563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8862b49e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 
'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 04:51:06.915569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4', 
'dm-uuid-CRYPT-LUKS2-37d3be0352e442eca3b448d6e6f02ec4-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915608 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:51:06.915620 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915655 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915667 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915679 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': 
{'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-36-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 04:51:06.915690 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915702 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915720 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:06.915753 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10408dfc', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 04:51:08.188464 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:08.188578 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:51:08.188596 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:51:08.188610 | orchestrator | 2026-03-24 04:51:08.188623 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 04:51:08.188664 | orchestrator | Tuesday 24 March 2026 04:51:08 +0000 (0:00:02.275) 0:01:48.905 ********* 2026-03-24 04:51:08.188678 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.188693 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.188705 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.188718 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.188765 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.188779 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.188798 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.188819 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2db98c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.188841 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.340884 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.340979 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:51:08.340989 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.340996 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.341002 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.341009 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.341027 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.341045 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.341055 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.341065 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6bbbff7c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15', 
'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.341075 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.341086 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642426 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:51:08.642554 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642576 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642588 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642601 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642630 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642642 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642699 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642717 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4fc154b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642736 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642755 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.642768 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:51:08.642787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.764995 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'uuids': ['53f92492-3feb-4aff-ba7b-51c07dc9f447'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f47182f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765194 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR', 'dm-uuid-CRYPT-LUKS2-0e39c5b023134ee09db3234d14233a9c-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'uuids': ['0e39c5b0-2313-4ee0-9db3-234d14233a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.765375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.872561 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.872703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85facbe5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 
'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.872743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.872769 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.872778 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-03-24 04:51:08.872787 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'uuids': ['b8232bef-dd2a-4f87-af94-920947facf6d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.872800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc', 'dm-uuid-CRYPT-LUKS2-53f924923feb4affba7b51c07dc9f447-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.872814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a2e3e3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.872827 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.949858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.949955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.949988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.950083 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.950098 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.950128 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 
'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'uuids': ['37d3be03-52e4-42ec-a3b4-48d6e6f02ec4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.950142 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3', 'dm-uuid-CRYPT-LUKS2-fea79c97fade4123ac0e1fedfdaf5b5c-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.950154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b1c01c59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.950178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.950191 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.950225 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'uuids': ['fea79c97-fade-4123-ac0e-1fedfdaf5b5c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987219 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987419 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 
'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987497 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987511 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987521 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '063919ee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 
'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987580 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987600 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA', 'dm-uuid-CRYPT-LUKS2-f7a38ad6fb8a47e49b12a27889e2fccd-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:08.987617 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087465 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087550 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7', 'dm-uuid-CRYPT-LUKS2-b8232befdd2a4f87af94920947facf6d-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'uuids': ['f7a38ad6-fb8a-47e4-9b12-a27889e2fccd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087615 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:51:09.087626 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': 
'41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59']}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087638 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:51:09.087647 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087672 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087711 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 
'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8862b49e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087729 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087738 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:09.087753 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-03-24 04:51:12.995577 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-36-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:12.995685 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:12.995698 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:12.995707 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:12.995716 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4', 'dm-uuid-CRYPT-LUKS2-37d3be0352e442eca3b448d6e6f02ec4-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:12.995724 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:12.995751 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:51:12.995785 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10408dfc', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part16', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part14', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part15', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': 
'106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part1', 'scsi-SQEMU_QEMU_HARDDISK_10408dfc-d3b8-4f62-9e98-aca56513cc7c-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:12.995796 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:51:12.995804 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-03-24 04:51:12.995811 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:51:12.995819 | orchestrator |
2026-03-24 04:51:12.995827 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-24 04:51:12.995842 | orchestrator | Tuesday 24 March 2026 04:51:10 +0000 (0:00:02.274) 0:01:51.179 *********
2026-03-24 04:51:12.995850 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:51:12.995857 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:51:12.995864 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:51:12.995871 | orchestrator | ok: [testbed-node-3]
2026-03-24 04:51:12.995878 | orchestrator | ok: [testbed-node-4]
2026-03-24 04:51:12.995885 | orchestrator | ok: [testbed-node-5]
2026-03-24 04:51:12.995892 | orchestrator | ok: [testbed-manager]
2026-03-24 04:51:12.995899 | orchestrator |
2026-03-24 04:51:12.995907 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-24 04:51:12.995918 | orchestrator | Tuesday 24 March 2026 04:51:12 +0000 (0:00:02.697) 0:01:53.877 *********
2026-03-24 04:51:42.810850 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:51:42.810951 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:51:42.810963 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:51:42.810972 | orchestrator | ok: [testbed-node-3]
2026-03-24 04:51:42.810981 | orchestrator | ok: [testbed-node-4]
2026-03-24 04:51:42.810990 | orchestrator | ok: [testbed-node-5]
2026-03-24 04:51:42.810998 | orchestrator | ok: [testbed-manager]
2026-03-24 04:51:42.811008 | orchestrator |
2026-03-24 04:51:42.811018 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 04:51:42.811028 | orchestrator | Tuesday 24 March 2026 04:51:15 +0000 (0:00:02.091) 0:01:55.968 *********
2026-03-24 04:51:42.811036 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:51:42.811046 | orchestrator | ok: [testbed-node-1]
2026-03-24 04:51:42.811054 | orchestrator | ok: [testbed-node-2]
2026-03-24 04:51:42.811063 | orchestrator | ok: [testbed-node-3]
2026-03-24 04:51:42.811072 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:51:42.811082 | orchestrator | ok: [testbed-node-4]
2026-03-24 04:51:42.811090 | orchestrator | ok: [testbed-node-5]
2026-03-24 04:51:42.811099 | orchestrator |
2026-03-24 04:51:42.811107 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 04:51:42.811116 | orchestrator | Tuesday 24 March 2026 04:51:17 +0000 (0:00:02.319) 0:01:58.288 *********
2026-03-24 04:51:42.811124 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:51:42.811133 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:51:42.811142 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:51:42.811150 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.811159 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:51:42.811167 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:51:42.811175 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:51:42.811184 | orchestrator |
2026-03-24 04:51:42.811208 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 04:51:42.811223 | orchestrator | Tuesday 24 March 2026 04:51:19 +0000 (0:00:01.814) 0:02:00.102 *********
2026-03-24 04:51:42.811295 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:51:42.811309 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:51:42.811323 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:51:42.811337 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.811350 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:51:42.811364 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:51:42.811378 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-03-24 04:51:42.811392 | orchestrator |
2026-03-24 04:51:42.811409 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 04:51:42.811425 | orchestrator | Tuesday 24 March 2026 04:51:21 +0000 (0:00:02.683) 0:02:02.786 *********
2026-03-24 04:51:42.811441 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:51:42.811457 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:51:42.811472 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:51:42.811488 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.811506 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:51:42.811523 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:51:42.811539 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:51:42.811581 | orchestrator |
2026-03-24 04:51:42.811598 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-24 04:51:42.811614 | orchestrator | Tuesday 24 March 2026 04:51:23 +0000 (0:00:01.902) 0:02:04.688 *********
2026-03-24 04:51:42.811632 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 04:51:42.811648 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-24 04:51:42.811664 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-24 04:51:42.811681 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-24 04:51:42.811696 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-24 04:51:42.811714 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-24 04:51:42.811729 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-24 04:51:42.811744 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-24 04:51:42.811758 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-24 04:51:42.811772 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-24 04:51:42.811787 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-24 04:51:42.811800 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-24 04:51:42.811814 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-03-24 04:51:42.811828 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-24 04:51:42.811843 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-03-24 04:51:42.811858 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-24 04:51:42.811874 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-24 04:51:42.811890 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-24 04:51:42.811905 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-03-24 04:51:42.811921 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-24 04:51:42.811936 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-24 04:51:42.811952 | orchestrator |
2026-03-24 04:51:42.811968 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-24 04:51:42.811984 | orchestrator | Tuesday 24 March 2026 04:51:26 +0000 (0:00:03.134) 0:02:07.822 *********
2026-03-24 04:51:42.811999 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 04:51:42.812015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-24 04:51:42.812029 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-24 04:51:42.812045 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:51:42.812061 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-24 04:51:42.812077 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-24 04:51:42.812093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-24 04:51:42.812108 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:51:42.812184 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-24 04:51:42.812202 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-24 04:51:42.812264 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-24 04:51:42.812282 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:51:42.812298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-24 04:51:42.812313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-24 04:51:42.812329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-24 04:51:42.812345 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.812361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-24 04:51:42.812377 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-24 04:51:42.812391 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-24 04:51:42.812405 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:51:42.812435 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-24 04:51:42.812450 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-24 04:51:42.812464 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-24 04:51:42.812478 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:51:42.812492 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-24 04:51:42.812506 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-24 04:51:42.812520 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-24 04:51:42.812534 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:51:42.812547 | orchestrator |
2026-03-24 04:51:42.812560 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-24 04:51:42.812575 | orchestrator | Tuesday 24 March 2026 04:51:28 +0000 (0:00:01.897) 0:02:09.719 *********
2026-03-24 04:51:42.812588 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:51:42.812602 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:51:42.812615 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:51:42.812628 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:51:42.812642 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 04:51:42.812656 | orchestrator |
2026-03-24 04:51:42.812670 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 04:51:42.812685 | orchestrator | Tuesday 24 March 2026 04:51:30 +0000 (0:00:02.083) 0:02:11.803 *********
2026-03-24 04:51:42.812698 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.812711 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:51:42.812724 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:51:42.812737 | orchestrator |
2026-03-24 04:51:42.812750 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 04:51:42.812763 | orchestrator | Tuesday 24 March 2026 04:51:32 +0000 (0:00:01.428) 0:02:13.231 *********
2026-03-24 04:51:42.812776 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.812790 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:51:42.812804 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:51:42.812816 | orchestrator |
2026-03-24 04:51:42.812830 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 04:51:42.812843 | orchestrator | Tuesday 24 March 2026 04:51:33 +0000 (0:00:01.320) 0:02:14.552 *********
2026-03-24 04:51:42.812857 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.812870 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:51:42.812883 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:51:42.812898 | orchestrator |
2026-03-24 04:51:42.812913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 04:51:42.812927 | orchestrator | Tuesday 24 March 2026 04:51:34 +0000 (0:00:01.322) 0:02:15.874 *********
2026-03-24 04:51:42.812941 | orchestrator | ok: [testbed-node-3]
2026-03-24 04:51:42.812956 | orchestrator | ok: [testbed-node-4]
2026-03-24 04:51:42.812970 | orchestrator | ok: [testbed-node-5]
2026-03-24 04:51:42.812985 | orchestrator |
2026-03-24 04:51:42.813001 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 04:51:42.813016 | orchestrator | Tuesday 24 March 2026 04:51:36 +0000 (0:00:01.500) 0:02:17.375 *********
2026-03-24 04:51:42.813032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 04:51:42.813047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 04:51:42.813062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 04:51:42.813074 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.813087 | orchestrator |
2026-03-24 04:51:42.813101 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 04:51:42.813116 | orchestrator | Tuesday 24 March 2026 04:51:38 +0000 (0:00:01.616) 0:02:18.991 *********
2026-03-24 04:51:42.813130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 04:51:42.813156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 04:51:42.813171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 04:51:42.813185 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.813198 | orchestrator |
2026-03-24 04:51:42.813213 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 04:51:42.813332 | orchestrator | Tuesday 24 March 2026 04:51:39 +0000 (0:00:01.649) 0:02:20.641 *********
2026-03-24 04:51:42.813346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 04:51:42.813354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 04:51:42.813363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 04:51:42.813372 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:51:42.813380 | orchestrator |
2026-03-24 04:51:42.813389 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 04:51:42.813398 | orchestrator | Tuesday 24 March 2026 04:51:41 +0000 (0:00:01.639) 0:02:22.281 *********
2026-03-24 04:51:42.813406 | orchestrator | ok: [testbed-node-3]
2026-03-24 04:51:42.813415 | orchestrator | ok: [testbed-node-4]
2026-03-24 04:51:42.813423 | orchestrator | ok: [testbed-node-5]
2026-03-24 04:51:42.813432 | orchestrator |
2026-03-24 04:51:42.813441 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 04:51:42.813464 | orchestrator | Tuesday 24 March 2026 04:51:42 +0000 (0:00:01.410) 0:02:23.691 *********
2026-03-24 04:52:30.724108 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-24 04:52:30.724251 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-24 04:52:30.724263 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-24 04:52:30.724269 | orchestrator |
2026-03-24 04:52:30.724276 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-24 04:52:30.724282 | orchestrator | Tuesday 24 March 2026 04:51:44 +0000 (0:00:01.600) 0:02:25.292 *********
2026-03-24 04:52:30.724289 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 04:52:30.724295 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 04:52:30.724302 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 04:52:30.724307 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-24 04:52:30.724313 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 04:52:30.724322 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 04:52:30.724331 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 04:52:30.724340 | orchestrator |
2026-03-24 04:52:30.724348 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-24 04:52:30.724372 | orchestrator | Tuesday 24 March 2026 04:51:46 +0000 (0:00:02.152) 0:02:27.444 *********
2026-03-24 04:52:30.724382 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 04:52:30.724392 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 04:52:30.724401 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 04:52:30.724410 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-24 04:52:30.724419 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 04:52:30.724427 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 04:52:30.724436 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 04:52:30.724445 | orchestrator |
2026-03-24 04:52:30.724455 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-03-24 04:52:30.724461 | orchestrator | Tuesday 24 March 2026 04:51:49 +0000 (0:00:02.908) 0:02:30.352 *********
2026-03-24 04:52:30.724485 | orchestrator | changed: [testbed-manager]
2026-03-24 04:52:30.724491 | orchestrator | changed: [testbed-node-5]
2026-03-24 04:52:30.724497 | orchestrator | changed: [testbed-node-3]
2026-03-24 04:52:30.724502 | orchestrator | changed: [testbed-node-4]
2026-03-24 04:52:30.724507 | orchestrator | changed: [testbed-node-0]
2026-03-24 04:52:30.724512 | orchestrator | changed: [testbed-node-1]
2026-03-24 04:52:30.724518 | orchestrator | changed: [testbed-node-2]
2026-03-24 04:52:30.724523 | orchestrator |
2026-03-24 04:52:30.724528 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-03-24 04:52:30.724533 | orchestrator | Tuesday 24 March 2026 04:52:00 +0000 (0:00:11.217) 0:02:41.570 *********
2026-03-24 04:52:30.724539 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:52:30.724544 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:52:30.724549 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:52:30.724555 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:52:30.724560 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:52:30.724565 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:52:30.724570 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:52:30.724576 | orchestrator |
2026-03-24 04:52:30.724581 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-03-24 04:52:30.724586 | orchestrator | Tuesday 24 March 2026 04:52:02 +0000 (0:00:02.024) 0:02:43.595 *********
2026-03-24 04:52:30.724592 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:52:30.724597 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:52:30.724603 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:52:30.724608 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:52:30.724613 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:52:30.724619 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:52:30.724624 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:52:30.724629 | orchestrator |
2026-03-24 04:52:30.724634 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-03-24 04:52:30.724640 | orchestrator | Tuesday 24 March 2026 04:52:04 +0000 (0:00:01.825) 0:02:45.421 *********
2026-03-24 04:52:30.724645 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:52:30.724650 | orchestrator | changed: [testbed-node-0]
2026-03-24 04:52:30.724656 | orchestrator | changed: [testbed-node-2]
2026-03-24 04:52:30.724661 | orchestrator | changed: [testbed-node-1]
2026-03-24 04:52:30.724666 | orchestrator | changed: [testbed-node-3]
2026-03-24 04:52:30.724672 | orchestrator | changed: [testbed-node-4]
2026-03-24 04:52:30.724679 | orchestrator | changed: [testbed-node-5]
2026-03-24 04:52:30.724685 | orchestrator |
2026-03-24 04:52:30.724691 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-03-24 04:52:30.724697 | orchestrator | Tuesday 24 March 2026 04:52:07 +0000 (0:00:02.878) 0:02:48.299 *********
2026-03-24 04:52:30.724704 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-24 04:52:30.724712 | orchestrator |
2026-03-24 04:52:30.724718 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-03-24 04:52:30.724724 | orchestrator | Tuesday 24 March 2026 04:52:10 +0000 (0:00:02.833) 0:02:51.133 *********
2026-03-24 04:52:30.724730 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:52:30.724736 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:52:30.724742 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:52:30.724748 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:52:30.724755 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:52:30.724774 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:52:30.724780 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:52:30.724786 | orchestrator |
2026-03-24 04:52:30.724792 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-03-24 04:52:30.724799 | orchestrator | Tuesday 24 March 2026 04:52:12 +0000 (0:00:01.857) 0:02:52.991 *********
2026-03-24 04:52:30.724811 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:52:30.724818 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:52:30.724824 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:52:30.724830 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:52:30.724837 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:52:30.724843 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:52:30.724849 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:52:30.724855 | orchestrator |
2026-03-24 04:52:30.724862 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-03-24 04:52:30.724868 | orchestrator | Tuesday 24 March 2026 04:52:14 +0000 (0:00:02.064) 0:02:55.055 *********
2026-03-24 04:52:30.724874 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:52:30.724881 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:52:30.724887 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:52:30.724893 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:52:30.724899 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:52:30.724905 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:52:30.724911 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:52:30.724918 | orchestrator |
2026-03-24 04:52:30.724928 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-03-24 04:52:30.724935 | orchestrator | Tuesday 24 March 2026 04:52:16 +0000 (0:00:02.443) 0:02:57.499 *********
2026-03-24 04:52:30.724942 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:52:30.724948 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:52:30.724954 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:52:30.724960 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:52:30.724966 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:52:30.724973 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:52:30.724979 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:52:30.724985 | orchestrator |
2026-03-24 04:52:30.724992 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-03-24 04:52:30.724998 | orchestrator | Tuesday 24 March 2026 04:52:18 +0000 (0:00:02.298) 0:02:59.798 *********
2026-03-24 04:52:30.725004 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:52:30.725010 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:52:30.725017 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:52:30.725023 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:52:30.725029 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:52:30.725035 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:52:30.725092 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:52:30.725098 | orchestrator |
2026-03-24 04:52:30.725104 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-03-24 04:52:30.725110 | orchestrator | Tuesday 24 March 2026 04:52:20 +0000 (0:00:01.849) 0:03:01.647 *********
2026-03-24 04:52:30.725116 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:52:30.725122 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:52:30.725127 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:52:30.725133 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:52:30.725139 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:52:30.725144 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:52:30.725150 | orchestrator | skipping:
[testbed-manager] 2026-03-24 04:52:30.725156 | orchestrator | 2026-03-24 04:52:30.725162 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-03-24 04:52:30.725167 | orchestrator | Tuesday 24 March 2026 04:52:22 +0000 (0:00:01.856) 0:03:03.504 ********* 2026-03-24 04:52:30.725173 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:30.725179 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:30.725185 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:30.725208 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:30.725217 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:30.725222 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:30.725228 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:30.725240 | orchestrator | 2026-03-24 04:52:30.725246 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-03-24 04:52:30.725251 | orchestrator | Tuesday 24 March 2026 04:52:24 +0000 (0:00:01.828) 0:03:05.332 ********* 2026-03-24 04:52:30.725257 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:30.725263 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:30.725268 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:30.725274 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:30.725280 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:30.725285 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:30.725291 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:30.725296 | orchestrator | 2026-03-24 04:52:30.725302 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-03-24 04:52:30.725308 | orchestrator | Tuesday 24 March 2026 04:52:26 +0000 (0:00:02.243) 0:03:07.576 ********* 2026-03-24 04:52:30.725313 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:30.725319 | 
orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:30.725325 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:30.725330 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:30.725336 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:30.725341 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:30.725347 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:30.725353 | orchestrator | 2026-03-24 04:52:30.725359 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-03-24 04:52:30.725369 | orchestrator | Tuesday 24 March 2026 04:52:28 +0000 (0:00:02.001) 0:03:09.578 ********* 2026-03-24 04:52:30.725378 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:30.725388 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:30.725398 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:30.725409 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:30.725419 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:30.725428 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:30.725439 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:30.725445 | orchestrator | 2026-03-24 04:52:30.725451 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-03-24 04:52:30.725462 | orchestrator | Tuesday 24 March 2026 04:52:30 +0000 (0:00:02.027) 0:03:11.605 ********* 2026-03-24 04:52:55.569030 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.569203 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.569226 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:55.569254 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.570095 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.570121 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.570136 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:55.570151 | 
orchestrator | 2026-03-24 04:52:55.570167 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-03-24 04:52:55.570223 | orchestrator | Tuesday 24 March 2026 04:52:32 +0000 (0:00:02.011) 0:03:13.617 ********* 2026-03-24 04:52:55.570237 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.570250 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.570263 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:55.570277 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.570292 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.570306 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.570319 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:55.570332 | orchestrator | 2026-03-24 04:52:55.570345 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-03-24 04:52:55.570358 | orchestrator | Tuesday 24 March 2026 04:52:34 +0000 (0:00:01.940) 0:03:15.558 ********* 2026-03-24 04:52:55.570372 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.570386 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.570399 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:55.570432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 04:52:55.570476 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 04:52:55.570491 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.570505 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 04:52:55.570517 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 04:52:55.570530 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.570544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 04:52:55.570557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 04:52:55.570570 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.570584 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:55.570598 | orchestrator | 2026-03-24 04:52:55.570611 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-03-24 04:52:55.570625 | orchestrator | Tuesday 24 March 2026 04:52:36 +0000 (0:00:02.146) 0:03:17.704 ********* 2026-03-24 04:52:55.570638 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.570651 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.570664 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:55.570677 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.570691 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.570704 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.570716 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:55.570729 | orchestrator | 2026-03-24 04:52:55.570743 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-03-24 04:52:55.570755 | orchestrator | Tuesday 24 March 2026 04:52:38 +0000 (0:00:01.792) 0:03:19.497 ********* 2026-03-24 04:52:55.570770 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.570784 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.570797 | orchestrator | skipping: 
[testbed-node-2] 2026-03-24 04:52:55.570810 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.570823 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.570835 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.570849 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:55.570861 | orchestrator | 2026-03-24 04:52:55.570874 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-03-24 04:52:55.570888 | orchestrator | Tuesday 24 March 2026 04:52:40 +0000 (0:00:02.096) 0:03:21.594 ********* 2026-03-24 04:52:55.570901 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.570914 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.570927 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:55.570940 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.570952 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.570965 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.570979 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:55.570992 | orchestrator | 2026-03-24 04:52:55.571005 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-03-24 04:52:55.571018 | orchestrator | Tuesday 24 March 2026 04:52:42 +0000 (0:00:01.829) 0:03:23.424 ********* 2026-03-24 04:52:55.571031 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.571044 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.571056 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:55.571069 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.571082 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.571109 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.571122 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:55.571135 | orchestrator | 2026-03-24 04:52:55.571149 | orchestrator | TASK [ceph-validate : Validate ntp 
daemon type] ******************************** 2026-03-24 04:52:55.571162 | orchestrator | Tuesday 24 March 2026 04:52:44 +0000 (0:00:02.111) 0:03:25.535 ********* 2026-03-24 04:52:55.571232 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.571248 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.571284 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:55.571299 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.571313 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.571326 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.571339 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:55.571352 | orchestrator | 2026-03-24 04:52:55.571365 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-03-24 04:52:55.571378 | orchestrator | Tuesday 24 March 2026 04:52:46 +0000 (0:00:02.144) 0:03:27.680 ********* 2026-03-24 04:52:55.571392 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.571406 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.571418 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:55.571431 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.571444 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.571457 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.571470 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:52:55.571483 | orchestrator | 2026-03-24 04:52:55.571497 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-03-24 04:52:55.571510 | orchestrator | Tuesday 24 March 2026 04:52:48 +0000 (0:00:01.980) 0:03:29.660 ********* 2026-03-24 04:52:55.571524 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:52:55.571536 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:52:55.571550 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:52:55.571563 | orchestrator | skipping: 
[testbed-manager] 2026-03-24 04:52:55.571583 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-24 04:52:55.571598 | orchestrator | 2026-03-24 04:52:55.571611 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-03-24 04:52:55.571625 | orchestrator | Tuesday 24 March 2026 04:52:51 +0000 (0:00:02.411) 0:03:32.072 ********* 2026-03-24 04:52:55.571638 | orchestrator | ok: [testbed-node-3] 2026-03-24 04:52:55.571652 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:52:55.571664 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:52:55.571677 | orchestrator | 2026-03-24 04:52:55.571691 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-03-24 04:52:55.571705 | orchestrator | Tuesday 24 March 2026 04:52:52 +0000 (0:00:01.366) 0:03:33.439 ********* 2026-03-24 04:52:55.571719 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})  2026-03-24 04:52:55.571732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})  2026-03-24 04:52:55.571745 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.571758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})  2026-03-24 04:52:55.571771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})  2026-03-24 04:52:55.571785 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:52:55.571799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})  2026-03-24 04:52:55.571812 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})  2026-03-24 04:52:55.571835 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:52:55.571849 | orchestrator | 2026-03-24 04:52:55.571861 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-03-24 04:52:55.571875 | orchestrator | Tuesday 24 March 2026 04:52:53 +0000 (0:00:01.414) 0:03:34.853 ********* 2026-03-24 04:52:55.571892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'}, 'ansible_loop_var': 'item'})  2026-03-24 04:52:55.571908 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'}, 'ansible_loop_var': 'item'})  2026-03-24 04:52:55.571921 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:52:55.571935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'}, 'ansible_loop_var': 'item'})  2026-03-24 04:52:55.571949 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'}, 'ansible_loop_var': 'item'})  2026-03-24 04:52:55.571969 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:53:03.120782 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'}, 'ansible_loop_var': 'item'})  2026-03-24 04:53:03.120889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}, 'ansible_loop_var': 'item'})  2026-03-24 04:53:03.120907 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:03.120921 | orchestrator | 2026-03-24 04:53:03.120934 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-03-24 04:53:03.120946 | orchestrator | Tuesday 24 March 2026 04:52:55 +0000 (0:00:01.599) 0:03:36.453 ********* 2026-03-24 04:53:03.120958 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:03.120970 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:53:03.120996 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:03.121007 | orchestrator | 2026-03-24 04:53:03.121019 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-03-24 04:53:03.121030 | orchestrator | Tuesday 24 March 2026 04:52:56 +0000 (0:00:01.355) 0:03:37.808 ********* 2026-03-24 04:53:03.121041 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:03.121052 | orchestrator | skipping: 
[testbed-node-4] 2026-03-24 04:53:03.121063 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:03.121074 | orchestrator | 2026-03-24 04:53:03.121085 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-03-24 04:53:03.121096 | orchestrator | Tuesday 24 March 2026 04:52:58 +0000 (0:00:01.291) 0:03:39.100 ********* 2026-03-24 04:53:03.121107 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:03.121118 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:53:03.121149 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:03.121161 | orchestrator | 2026-03-24 04:53:03.121203 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-03-24 04:53:03.121215 | orchestrator | Tuesday 24 March 2026 04:52:59 +0000 (0:00:01.319) 0:03:40.420 ********* 2026-03-24 04:53:03.121226 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:03.121237 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:53:03.121248 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:03.121258 | orchestrator | 2026-03-24 04:53:03.121269 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-03-24 04:53:03.121280 | orchestrator | Tuesday 24 March 2026 04:53:00 +0000 (0:00:01.336) 0:03:41.756 ********* 2026-03-24 04:53:03.121291 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'}) 2026-03-24 04:53:03.121304 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'}) 2026-03-24 04:53:03.121315 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'}) 2026-03-24 04:53:03.121326 | orchestrator | ok: 
[testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'}) 2026-03-24 04:53:03.121337 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'}) 2026-03-24 04:53:03.121347 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}) 2026-03-24 04:53:03.121358 | orchestrator | 2026-03-24 04:53:03.121369 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-03-24 04:53:03.121381 | orchestrator | Tuesday 24 March 2026 04:53:02 +0000 (0:00:02.011) 0:03:43.767 ********* 2026-03-24 04:53:03.121415 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-4d21def1-f46f-5673-adc8-800ee07d688b/osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1774320579.658155, 'mtime': 1774320579.655155, 'ctime': 1774320579.655155, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-4d21def1-f46f-5673-adc8-800ee07d688b/osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 
'failed': False, 'item': {'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'}, 'ansible_loop_var': 'item'})  2026-03-24 04:53:03.121438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80/osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1774320600.3685172, 'mtime': 1774320600.3655171, 'ctime': 1774320600.3655171, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80/osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'}, 'ansible_loop_var': 'item'})  2026-03-24 04:53:03.121459 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:03.121472 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-4d735645-9e18-5d04-8028-1696940918c0/osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 
6, 'nlink': 1, 'atime': 1774320577.6021216, 'mtime': 1774320577.5981216, 'ctime': 1774320577.5981216, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-4d735645-9e18-5d04-8028-1696940918c0/osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'}, 'ansible_loop_var': 'item'})  2026-03-24 04:53:03.121493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-a329e066-8536-5438-99e1-d9cc3f91f537/osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1774320597.198486, 'mtime': 1774320597.1934862, 'ctime': 1774320597.1934862, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-a329e066-8536-5438-99e1-d9cc3f91f537/osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.888990 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:08.889145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-7dc39596-c9fc-583d-89f8-392d010fb80f/osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1774320578.8693473, 'mtime': 1774320578.8653474, 'ctime': 1774320578.8653474, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-7dc39596-c9fc-583d-89f8-392d010fb80f/osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.889256 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59/osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1774320598.2186837, 'mtime': 1774320598.2146838, 'ctime': 1774320598.2146838, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59/osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.889272 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:08.889284 | orchestrator |
2026-03-24 04:53:08.889297 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-03-24 04:53:08.889309 | orchestrator | Tuesday 24 March 2026 04:53:04 +0000 (0:00:01.410) 0:03:45.178 *********
2026-03-24 04:53:08.889321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})
2026-03-24 04:53:08.889334 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})
2026-03-24 04:53:08.889345 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:08.889356 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})
2026-03-24 04:53:08.889367 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})
2026-03-24 04:53:08.889378 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:08.889389 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})
2026-03-24 04:53:08.889400 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})
2026-03-24 04:53:08.889411 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:08.889422 | orchestrator |
2026-03-24 04:53:08.889433 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-03-24 04:53:08.889462 | orchestrator | Tuesday 24 March 2026 04:53:05 +0000 (0:00:01.475) 0:03:46.654 *********
2026-03-24 04:53:08.889485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.889498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.889515 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:08.889530 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.889544 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.889557 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:08.889570 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.889583 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.889595 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:08.889608 | orchestrator |
2026-03-24 04:53:08.889622 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-03-24 04:53:08.889633 | orchestrator | Tuesday 24 March 2026 04:53:07 +0000 (0:00:01.360) 0:03:48.015 *********
2026-03-24 04:53:08.889644 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'})
2026-03-24 04:53:08.889655 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'})
2026-03-24 04:53:08.889666 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:08.889677 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'})
2026-03-24 04:53:08.889687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'})
2026-03-24 04:53:08.889698 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:08.889709 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'})
2026-03-24 04:53:08.889719 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'})
2026-03-24 04:53:08.889730 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:08.889741 | orchestrator |
2026-03-24 04:53:08.889752 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-03-24 04:53:08.889771 | orchestrator | Tuesday 24 March 2026 04:53:08 +0000 (0:00:01.629) 0:03:49.644 *********
2026-03-24 04:53:08.889783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-4d21def1-f46f-5673-adc8-800ee07d688b', 'data_vg': 'ceph-4d21def1-f46f-5673-adc8-800ee07d688b'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:08.889801 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-d7857bb6-ee47-5754-bddf-a4c3c3300a80', 'data_vg': 'ceph-d7857bb6-ee47-5754-bddf-a4c3c3300a80'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:18.105737 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:18.105820 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-4d735645-9e18-5d04-8028-1696940918c0', 'data_vg': 'ceph-4d735645-9e18-5d04-8028-1696940918c0'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:18.105843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-a329e066-8536-5438-99e1-d9cc3f91f537', 'data_vg': 'ceph-a329e066-8536-5438-99e1-d9cc3f91f537'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:18.105850 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:18.105857 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-7dc39596-c9fc-583d-89f8-392d010fb80f', 'data_vg': 'ceph-7dc39596-c9fc-583d-89f8-392d010fb80f'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:18.105863 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-7e9350b0-7da1-52b7-a847-2b8ea41c8f59', 'data_vg': 'ceph-7e9350b0-7da1-52b7-a847-2b8ea41c8f59'}, 'ansible_loop_var': 'item'})
2026-03-24 04:53:18.105868 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:18.105874 | orchestrator |
2026-03-24 04:53:18.105881 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-03-24 04:53:18.105888 | orchestrator | Tuesday 24 March 2026 04:53:10 +0000 (0:00:01.496) 0:03:51.141 *********
2026-03-24 04:53:18.105893 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:18.105899 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:18.105904 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:18.105910 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:18.105915 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:18.105920 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:18.105926 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:18.105931 | orchestrator |
2026-03-24 04:53:18.105937 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-03-24 04:53:18.105943 | orchestrator | Tuesday 24 March 2026 04:53:12 +0000 (0:00:01.868) 0:03:53.009 *********
2026-03-24 04:53:18.105948 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:18.105953 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:18.105959 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:18.105964 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:18.105970 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 04:53:18.105976 | orchestrator |
2026-03-24 04:53:18.105981 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-03-24 04:53:18.105987 | orchestrator | Tuesday 24 March 2026 04:53:14 +0000 (0:00:02.464) 0:03:55.474 *********
2026-03-24 04:53:18.106009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106087 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:18.106092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106120 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:18.106125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106142 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106260 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:18.106269 | orchestrator |
2026-03-24 04:53:18.106283 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ********************
2026-03-24 04:53:18.106292 | orchestrator | Tuesday 24 March 2026 04:53:16 +0000 (0:00:01.523) 0:03:56.998 *********
2026-03-24 04:53:18.106301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106347 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:18.106356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106411 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:18.106417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106453 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:18.106461 | orchestrator |
2026-03-24 04:53:18.106471 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ********************
2026-03-24 04:53:18.106480 | orchestrator | Tuesday 24 March 2026 04:53:17 +0000 (0:00:01.746) 0:03:58.744 *********
2026-03-24 04:53:18.106491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106540 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:18.106547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:18.106567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:34.504327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:34.504440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:34.504452 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:34.504462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:34.504487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:34.504495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:34.504556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:34.504564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-24 04:53:34.504571 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:34.504579 | orchestrator |
2026-03-24 04:53:34.504589 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] ***********************************
2026-03-24 04:53:34.504598 | orchestrator | Tuesday 24 March 2026 04:53:19 +0000 (0:00:01.549) 0:04:00.294 *********
2026-03-24 04:53:34.504606 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:34.504614 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:34.504621 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:34.504629 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:34.504637 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:34.504644 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:34.504652 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:34.504659 | orchestrator |
2026-03-24 04:53:34.504668 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-03-24 04:53:34.504676 | orchestrator | Tuesday 24 March 2026 04:53:21 +0000 (0:00:02.186) 0:04:02.480 *********
2026-03-24 04:53:34.504683 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:34.504691 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:34.504698 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:34.504706 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:34.504714 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:34.504721 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:34.504729 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:34.504736 | orchestrator |
2026-03-24 04:53:34.504744 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-03-24 04:53:34.504752 | orchestrator | Tuesday 24 March 2026 04:53:23 +0000 (0:00:02.101) 0:04:04.582 *********
2026-03-24 04:53:34.504760 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:34.504767 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:34.504775 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:34.504782 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:34.504790 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:34.504798 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:34.504805 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:34.504813 | orchestrator |
2026-03-24 04:53:34.504821 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-03-24 04:53:34.504830 | orchestrator | Tuesday 24 March 2026 04:53:25 +0000 (0:00:02.085) 0:04:06.668 *********
2026-03-24 04:53:34.504838 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:34.504845 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:34.504853 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:34.504861 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:34.504868 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:34.504876 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:34.504884 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:34.504891 | orchestrator |
2026-03-24 04:53:34.504899 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-03-24 04:53:34.504907 | orchestrator | Tuesday 24 March 2026 04:53:27 +0000 (0:00:01.862) 0:04:08.530 *********
2026-03-24 04:53:34.504915 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:34.504923 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:34.504931 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:34.504938 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:34.504946 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:34.504954 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:34.504962 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:34.505000 | orchestrator |
2026-03-24 04:53:34.505009 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-03-24 04:53:34.505016 | orchestrator | Tuesday 24 March 2026 04:53:29 +0000 (0:00:02.003) 0:04:10.534 *********
2026-03-24 04:53:34.505024 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:34.505032 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:34.505039 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:34.505047 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:34.505055 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:34.505063 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:34.505070 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:34.505078 | orchestrator |
2026-03-24 04:53:34.505086 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-03-24 04:53:34.505094 | orchestrator | Tuesday 24 March 2026 04:53:31 +0000 (0:00:01.924) 0:04:12.459 *********
2026-03-24 04:53:34.505102 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:34.505109 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:34.505117 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:34.505125 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:34.505134 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:34.505141 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:34.505173 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:34.505181 | orchestrator |
2026-03-24 04:53:34.505214 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-03-24 04:53:34.505222 | orchestrator | Tuesday 24 March 2026 04:53:33 +0000 (0:00:02.067) 0:04:14.527 *********
2026-03-24 04:53:34.505230 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-24 04:53:34.505240 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-24 04:53:34.505257 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-24 04:53:34.505267 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-24 04:53:34.505276 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-24 04:53:34.505286 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-24 04:53:34.505294 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:34.505302 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-24 04:53:34.505310 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-24 04:53:34.505318 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-24 04:53:34.505326 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-24 04:53:34.505334 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-24 04:53:34.505380 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-24 04:53:34.505388 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:34.505396 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-24 04:53:34.505404 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-24 04:53:34.505412 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-24 04:53:34.505419 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-24 04:53:34.505427 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-24 04:53:34.505435 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-24 04:53:34.505443 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:34.505451 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-24 04:53:34.505459 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-24 04:53:34.505474 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-24 04:53:38.430470 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-24 04:53:38.430578 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-24 04:53:38.430606 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-24 04:53:38.430615 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-24 04:53:38.430623 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-24 04:53:38.430632 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-24 04:53:38.430641 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-24 04:53:38.430648 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-24 04:53:38.430656 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-24 04:53:38.430684 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-24 04:53:38.430693 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-24 04:53:38.430701 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-24 04:53:38.430709 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:38.430718 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-24 04:53:38.430726 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-24 04:53:38.430735 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:38.430744 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-03-24 04:53:38.430752 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-03-24 04:53:38.430761 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-03-24 04:53:38.430769 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-03-24 04:53:38.430778 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-03-24 04:53:38.430786 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-24 04:53:38.430795 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:38.430804 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-03-24 04:53:38.430812 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:38.430821 | orchestrator |
2026-03-24 04:53:38.430848 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-03-24 04:53:38.430858 | orchestrator | Tuesday 24 March 2026 04:53:35 +0000 (0:00:02.147) 0:04:16.675 *********
2026-03-24 04:53:38.430867 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:53:38.430876 | orchestrator | skipping: [testbed-node-1]
2026-03-24 04:53:38.430884 | orchestrator | skipping: [testbed-node-2]
2026-03-24 04:53:38.430893 | orchestrator | skipping: [testbed-node-3]
2026-03-24 04:53:38.430901 | orchestrator | skipping: [testbed-node-4]
2026-03-24 04:53:38.430910 | orchestrator | skipping: [testbed-node-5]
2026-03-24 04:53:38.430919 | orchestrator | skipping: [testbed-manager]
2026-03-24 04:53:38.430927 | orchestrator |
2026-03-24 04:53:38.430941 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-03-24 04:53:38.430950 | orchestrator |
Tuesday 24 March 2026 04:53:37 +0000 (0:00:02.106) 0:04:18.782 ********* 2026-03-24 04:53:38.430959 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-24 04:53:38.430974 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-24 04:53:38.430982 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-24 04:53:38.430991 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-24 04:53:38.431000 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-24 04:53:38.431009 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-24 04:53:38.431017 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:53:38.431027 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-24 04:53:38.431037 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-24 04:53:38.431047 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile 
rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-24 04:53:38.431056 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-24 04:53:38.431066 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-24 04:53:38.431076 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-24 04:53:38.431086 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:53:38.431095 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-24 04:53:38.431105 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-24 04:53:38.431116 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-24 04:53:38.431125 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-24 04:53:38.431135 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-24 04:53:38.431174 | orchestrator | 
skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-24 04:53:38.431184 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:53:38.431198 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-24 04:53:57.939117 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-24 04:53:57.939294 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-24 04:53:57.939323 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-24 04:53:57.939343 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-24 04:53:57.939369 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-24 04:53:57.939390 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-24 04:53:57.939407 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-24 
04:53:57.939428 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-24 04:53:57.939447 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-24 04:53:57.939466 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-24 04:53:57.939478 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:57.939490 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-24 04:53:57.939502 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-24 04:53:57.939513 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-24 04:53:57.939524 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-24 04:53:57.939542 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:53:57.939560 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-24 04:53:57.939579 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd 
pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-24 04:53:57.939596 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-24 04:53:57.939614 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-24 04:53:57.939632 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-24 04:53:57.939684 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-24 04:53:57.939702 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-24 04:53:57.939744 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-24 04:53:57.939763 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:53:57.939785 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-24 04:53:57.939809 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:57.939829 | orchestrator | 2026-03-24 04:53:57.939860 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-03-24 04:53:57.939885 | 
orchestrator | Tuesday 24 March 2026 04:53:40 +0000 (0:00:02.140) 0:04:20.923 ********* 2026-03-24 04:53:57.939903 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:53:57.939922 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:53:57.939941 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:53:57.939960 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:57.939977 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:53:57.939994 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:57.940014 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:53:57.940028 | orchestrator | 2026-03-24 04:53:57.940039 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-03-24 04:53:57.940050 | orchestrator | Tuesday 24 March 2026 04:53:42 +0000 (0:00:02.176) 0:04:23.099 ********* 2026-03-24 04:53:57.940061 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:53:57.940071 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:53:57.940082 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:53:57.940092 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:57.940103 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:53:57.940113 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:57.940124 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:53:57.940167 | orchestrator | 2026-03-24 04:53:57.940178 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-03-24 04:53:57.940189 | orchestrator | Tuesday 24 March 2026 04:53:44 +0000 (0:00:02.002) 0:04:25.102 ********* 2026-03-24 04:53:57.940200 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:53:57.940211 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:53:57.940221 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:53:57.940232 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:57.940243 | orchestrator | skipping: 
[testbed-node-4] 2026-03-24 04:53:57.940254 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:57.940265 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:53:57.940275 | orchestrator | 2026-03-24 04:53:57.940286 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-24 04:53:57.940297 | orchestrator | Tuesday 24 March 2026 04:53:46 +0000 (0:00:02.373) 0:04:27.476 ********* 2026-03-24 04:53:57.940309 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-24 04:53:57.940322 | orchestrator | 2026-03-24 04:53:57.940357 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-03-24 04:53:57.940379 | orchestrator | Tuesday 24 March 2026 04:53:49 +0000 (0:00:02.713) 0:04:30.189 ********* 2026-03-24 04:53:57.940402 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-24 04:53:57.940426 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-24 04:53:57.940437 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-24 04:53:57.940447 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-24 04:53:57.940458 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-24 04:53:57.940469 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-24 04:53:57.940480 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-24 04:53:57.940491 | orchestrator | 2026-03-24 04:53:57.940502 | orchestrator | TASK [ceph-container-engine : Create 
the systemd docker override directory] **** 2026-03-24 04:53:57.940512 | orchestrator | Tuesday 24 March 2026 04:53:51 +0000 (0:00:02.053) 0:04:32.243 ********* 2026-03-24 04:53:57.940523 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:53:57.940534 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:53:57.940545 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:53:57.940556 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:57.940566 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:53:57.940577 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:57.940588 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:53:57.940598 | orchestrator | 2026-03-24 04:53:57.940609 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-03-24 04:53:57.940620 | orchestrator | Tuesday 24 March 2026 04:53:53 +0000 (0:00:02.080) 0:04:34.324 ********* 2026-03-24 04:53:57.940631 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:53:57.940642 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:53:57.940652 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:53:57.940663 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:53:57.940674 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:53:57.940684 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:53:57.940695 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:53:57.940706 | orchestrator | 2026-03-24 04:53:57.940717 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-03-24 04:53:57.940728 | orchestrator | Tuesday 24 March 2026 04:53:55 +0000 (0:00:01.934) 0:04:36.259 ********* 2026-03-24 04:53:57.940739 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:53:57.940751 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:53:57.940761 | orchestrator | ok: [testbed-node-2] 2026-03-24 04:53:57.940772 | orchestrator | ok: [testbed-node-3] 2026-03-24 
04:53:57.940783 | orchestrator | ok: [testbed-node-4] 2026-03-24 04:53:57.940793 | orchestrator | ok: [testbed-node-5] 2026-03-24 04:53:57.940815 | orchestrator | ok: [testbed-manager] 2026-03-24 04:54:43.011448 | orchestrator | 2026-03-24 04:54:43.011551 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-03-24 04:54:43.011562 | orchestrator | Tuesday 24 March 2026 04:53:57 +0000 (0:00:02.565) 0:04:38.824 ********* 2026-03-24 04:54:43.011568 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:54:43.011573 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:54:43.011577 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:54:43.011581 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:54:43.011585 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:54:43.011601 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:54:43.011605 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:54:43.011609 | orchestrator | 2026-03-24 04:54:43.011613 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-24 04:54:43.011617 | orchestrator | Tuesday 24 March 2026 04:54:00 +0000 (0:00:02.272) 0:04:41.097 ********* 2026-03-24 04:54:43.011621 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:54:43.011625 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:54:43.011629 | orchestrator | skipping: [testbed-node-2] 2026-03-24 04:54:43.011648 | orchestrator | skipping: [testbed-node-3] 2026-03-24 04:54:43.011655 | orchestrator | skipping: [testbed-node-4] 2026-03-24 04:54:43.011659 | orchestrator | skipping: [testbed-node-5] 2026-03-24 04:54:43.011663 | orchestrator | skipping: [testbed-manager] 2026-03-24 04:54:43.011667 | orchestrator | 2026-03-24 04:54:43.011671 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-03-24 04:54:43.011674 | orchestrator | Tuesday 24 March 2026 04:54:02 
+0000 (0:00:02.289) 0:04:43.386 ********* 2026-03-24 04:54:43.011678 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011683 | orchestrator | 2026-03-24 04:54:43.011687 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-03-24 04:54:43.011691 | orchestrator | Tuesday 24 March 2026 04:54:05 +0000 (0:00:02.613) 0:04:45.999 ********* 2026-03-24 04:54:43.011695 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:54:43.011699 | orchestrator | 2026-03-24 04:54:43.011702 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-03-24 04:54:43.011706 | orchestrator | 2026-03-24 04:54:43.011710 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 04:54:43.011714 | orchestrator | Tuesday 24 March 2026 04:54:06 +0000 (0:00:01.710) 0:04:47.710 ********* 2026-03-24 04:54:43.011718 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011721 | orchestrator | 2026-03-24 04:54:43.011725 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 04:54:43.011729 | orchestrator | Tuesday 24 March 2026 04:54:08 +0000 (0:00:01.458) 0:04:49.168 ********* 2026-03-24 04:54:43.011732 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011736 | orchestrator | 2026-03-24 04:54:43.011740 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-03-24 04:54:43.011744 | orchestrator | Tuesday 24 March 2026 04:54:09 +0000 (0:00:01.133) 0:04:50.301 ********* 2026-03-24 04:54:43.011750 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 
'public_network', 'value': '192.168.16.0/20'}]) 2026-03-24 04:54:43.011756 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-24 04:54:43.011760 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-24 04:54:43.011764 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-24 04:54:43.011770 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-24 04:54:43.011784 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}])  2026-03-24 04:54:43.011794 | orchestrator | 2026-03-24 04:54:43.011798 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-24 04:54:43.011802 | orchestrator | 2026-03-24 04:54:43.011806 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-24 04:54:43.011810 | orchestrator | Tuesday 24 March 2026 04:54:19 +0000 (0:00:10.251) 0:05:00.552 ********* 2026-03-24 04:54:43.011816 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011820 | orchestrator | 2026-03-24 04:54:43.011824 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-24 04:54:43.011827 | orchestrator | Tuesday 24 March 2026 04:54:21 +0000 (0:00:01.554) 0:05:02.106 ********* 2026-03-24 04:54:43.011831 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011835 | orchestrator | 2026-03-24 04:54:43.011838 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-24 04:54:43.011842 | orchestrator | Tuesday 24 March 2026 04:54:22 +0000 (0:00:01.166) 0:05:03.273 ********* 2026-03-24 04:54:43.011846 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:54:43.011850 | orchestrator | 2026-03-24 04:54:43.011853 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-24 04:54:43.011857 | orchestrator | Tuesday 24 March 2026 04:54:23 +0000 (0:00:01.130) 0:05:04.404 ********* 2026-03-24 04:54:43.011861 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011865 | orchestrator | 2026-03-24 04:54:43.011868 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 04:54:43.011872 | orchestrator | Tuesday 24 March 2026 
04:54:24 +0000 (0:00:01.128) 0:05:05.533 ********* 2026-03-24 04:54:43.011876 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-24 04:54:43.011880 | orchestrator | 2026-03-24 04:54:43.011883 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 04:54:43.011887 | orchestrator | Tuesday 24 March 2026 04:54:25 +0000 (0:00:01.174) 0:05:06.707 ********* 2026-03-24 04:54:43.011891 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011895 | orchestrator | 2026-03-24 04:54:43.011899 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 04:54:43.011902 | orchestrator | Tuesday 24 March 2026 04:54:27 +0000 (0:00:01.483) 0:05:08.191 ********* 2026-03-24 04:54:43.011906 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011910 | orchestrator | 2026-03-24 04:54:43.011914 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 04:54:43.011917 | orchestrator | Tuesday 24 March 2026 04:54:28 +0000 (0:00:01.138) 0:05:09.329 ********* 2026-03-24 04:54:43.011921 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011925 | orchestrator | 2026-03-24 04:54:43.011929 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 04:54:43.011932 | orchestrator | Tuesday 24 March 2026 04:54:29 +0000 (0:00:01.458) 0:05:10.788 ********* 2026-03-24 04:54:43.011936 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011940 | orchestrator | 2026-03-24 04:54:43.011944 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 04:54:43.011947 | orchestrator | Tuesday 24 March 2026 04:54:31 +0000 (0:00:01.124) 0:05:11.913 ********* 2026-03-24 04:54:43.011951 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011955 | orchestrator | 2026-03-24 04:54:43.011959 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 04:54:43.011963 | orchestrator | Tuesday 24 March 2026 04:54:32 +0000 (0:00:01.127) 0:05:13.040 ********* 2026-03-24 04:54:43.011966 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.011970 | orchestrator | 2026-03-24 04:54:43.011974 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 04:54:43.011981 | orchestrator | Tuesday 24 March 2026 04:54:33 +0000 (0:00:01.144) 0:05:14.185 ********* 2026-03-24 04:54:43.011985 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:54:43.011989 | orchestrator | 2026-03-24 04:54:43.011993 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 04:54:43.011996 | orchestrator | Tuesday 24 March 2026 04:54:34 +0000 (0:00:01.152) 0:05:15.338 ********* 2026-03-24 04:54:43.012000 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.012004 | orchestrator | 2026-03-24 04:54:43.012008 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 04:54:43.012012 | orchestrator | Tuesday 24 March 2026 04:54:35 +0000 (0:00:01.157) 0:05:16.495 ********* 2026-03-24 04:54:43.012017 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:54:43.012022 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 04:54:43.012026 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 04:54:43.012030 | orchestrator | 2026-03-24 04:54:43.012035 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 04:54:43.012039 | orchestrator | Tuesday 24 March 2026 04:54:37 +0000 (0:00:01.629) 0:05:18.125 ********* 2026-03-24 04:54:43.012043 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:54:43.012048 | 
orchestrator | 2026-03-24 04:54:43.012052 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 04:54:43.012056 | orchestrator | Tuesday 24 March 2026 04:54:38 +0000 (0:00:01.243) 0:05:19.368 ********* 2026-03-24 04:54:43.012061 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:54:43.012065 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 04:54:43.012069 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 04:54:43.012074 | orchestrator | 2026-03-24 04:54:43.012079 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 04:54:43.012083 | orchestrator | Tuesday 24 March 2026 04:54:41 +0000 (0:00:03.118) 0:05:22.487 ********* 2026-03-24 04:54:43.012088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 04:54:43.012092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 04:54:43.012099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 04:55:05.852677 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.852779 | orchestrator | 2026-03-24 04:55:05.852792 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 04:55:05.852803 | orchestrator | Tuesday 24 March 2026 04:54:42 +0000 (0:00:01.408) 0:05:23.896 ********* 2026-03-24 04:55:05.852827 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 04:55:05.852838 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 04:55:05.852846 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 04:55:05.852855 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.852863 | orchestrator | 2026-03-24 04:55:05.852871 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 04:55:05.852879 | orchestrator | Tuesday 24 March 2026 04:54:44 +0000 (0:00:01.873) 0:05:25.769 ********* 2026-03-24 04:55:05.852889 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:05.852918 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:05.852927 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:05.852935 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.852943 | orchestrator | 2026-03-24 04:55:05.852951 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 04:55:05.852959 | orchestrator | Tuesday 24 March 2026 04:54:46 +0000 (0:00:01.192) 0:05:26.962 ********* 2026-03-24 04:55:05.852969 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cefde431640e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 04:54:38.997978', 'end': '2026-03-24 04:54:39.036194', 'delta': '0:00:00.038216', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cefde431640e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 04:55:05.852994 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '4f8b0ade79f3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 04:54:39.562870', 'end': '2026-03-24 04:54:39.623357', 'delta': '0:00:00.060487', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f8b0ade79f3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 04:55:05.853008 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cce21668b5d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 04:54:40.390783', 'end': '2026-03-24 04:54:40.431802', 'delta': '0:00:00.041019', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cce21668b5d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 04:55:05.853016 | orchestrator | 2026-03-24 04:55:05.853025 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 04:55:05.853033 | orchestrator | Tuesday 24 March 2026 04:54:47 +0000 (0:00:01.186) 0:05:28.149 ********* 2026-03-24 04:55:05.853047 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:55:05.853056 | orchestrator | 2026-03-24 04:55:05.853064 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 04:55:05.853072 | orchestrator | Tuesday 24 March 2026 04:54:48 +0000 (0:00:01.538) 0:05:29.687 ********* 2026-03-24 04:55:05.853080 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853141 | orchestrator | 2026-03-24 04:55:05.853152 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 04:55:05.853160 | orchestrator | Tuesday 24 March 2026 04:54:50 +0000 (0:00:01.232) 0:05:30.920 ********* 2026-03-24 04:55:05.853168 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:55:05.853176 | orchestrator | 2026-03-24 04:55:05.853184 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-03-24 04:55:05.853192 | orchestrator | Tuesday 24 March 2026 04:54:51 +0000 (0:00:01.164) 0:05:32.085 ********* 2026-03-24 04:55:05.853200 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-24 04:55:05.853208 | orchestrator | 2026-03-24 04:55:05.853217 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 04:55:05.853226 | orchestrator | Tuesday 24 March 2026 04:54:53 +0000 (0:00:02.180) 0:05:34.265 ********* 2026-03-24 04:55:05.853235 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:55:05.853244 | orchestrator | 2026-03-24 04:55:05.853254 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 04:55:05.853263 | orchestrator | Tuesday 24 March 2026 04:54:54 +0000 (0:00:01.122) 0:05:35.388 ********* 2026-03-24 04:55:05.853272 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853281 | orchestrator | 2026-03-24 04:55:05.853289 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 04:55:05.853298 | orchestrator | Tuesday 24 March 2026 04:54:55 +0000 (0:00:01.113) 0:05:36.501 ********* 2026-03-24 04:55:05.853307 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853316 | orchestrator | 2026-03-24 04:55:05.853325 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 04:55:05.853335 | orchestrator | Tuesday 24 March 2026 04:54:56 +0000 (0:00:01.225) 0:05:37.727 ********* 2026-03-24 04:55:05.853344 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853353 | orchestrator | 2026-03-24 04:55:05.853362 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 04:55:05.853371 | orchestrator | Tuesday 24 March 2026 04:54:57 +0000 (0:00:01.104) 0:05:38.831 ********* 
2026-03-24 04:55:05.853380 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853389 | orchestrator | 2026-03-24 04:55:05.853398 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 04:55:05.853407 | orchestrator | Tuesday 24 March 2026 04:54:59 +0000 (0:00:01.105) 0:05:39.937 ********* 2026-03-24 04:55:05.853416 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853426 | orchestrator | 2026-03-24 04:55:05.853434 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 04:55:05.853443 | orchestrator | Tuesday 24 March 2026 04:55:00 +0000 (0:00:01.127) 0:05:41.064 ********* 2026-03-24 04:55:05.853455 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853469 | orchestrator | 2026-03-24 04:55:05.853483 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 04:55:05.853497 | orchestrator | Tuesday 24 March 2026 04:55:01 +0000 (0:00:01.137) 0:05:42.202 ********* 2026-03-24 04:55:05.853511 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853526 | orchestrator | 2026-03-24 04:55:05.853540 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 04:55:05.853555 | orchestrator | Tuesday 24 March 2026 04:55:02 +0000 (0:00:01.090) 0:05:43.292 ********* 2026-03-24 04:55:05.853571 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853582 | orchestrator | 2026-03-24 04:55:05.853592 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 04:55:05.853601 | orchestrator | Tuesday 24 March 2026 04:55:03 +0000 (0:00:01.108) 0:05:44.400 ********* 2026-03-24 04:55:05.853618 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:05.853626 | orchestrator | 2026-03-24 04:55:05.853634 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-03-24 04:55:05.853641 | orchestrator | Tuesday 24 March 2026 04:55:04 +0000 (0:00:01.099) 0:05:45.500 ********* 2026-03-24 04:55:05.853658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:55:07.049503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:55:07.049612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:55:07.049631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 04:55:07.049648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:55:07.049661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:55:07.049673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:55:07.049715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2db98c7e', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 04:55:07.049752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:55:07.049765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 04:55:07.049777 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:07.049790 | orchestrator | 2026-03-24 04:55:07.049802 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 04:55:07.049815 | orchestrator | Tuesday 24 March 2026 04:55:05 +0000 (0:00:01.240) 0:05:46.741 ********* 2026-03-24 04:55:07.049828 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:07.049841 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:07.049860 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:07.049885 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:20.827164 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:20.827267 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:20.827279 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:20.827319 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2db98c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:20.827350 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:20.827359 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 04:55:20.827367 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:20.827377 | orchestrator | 2026-03-24 04:55:20.827385 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 04:55:20.827394 | 
orchestrator | Tuesday 24 March 2026 04:55:07 +0000 (0:00:01.201) 0:05:47.942 ********* 2026-03-24 04:55:20.827402 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:55:20.827410 | orchestrator | 2026-03-24 04:55:20.827418 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 04:55:20.827425 | orchestrator | Tuesday 24 March 2026 04:55:08 +0000 (0:00:01.523) 0:05:49.466 ********* 2026-03-24 04:55:20.827433 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:55:20.827440 | orchestrator | 2026-03-24 04:55:20.827447 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 04:55:20.827455 | orchestrator | Tuesday 24 March 2026 04:55:09 +0000 (0:00:01.111) 0:05:50.577 ********* 2026-03-24 04:55:20.827474 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:55:20.827487 | orchestrator | 2026-03-24 04:55:20.827499 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 04:55:20.827511 | orchestrator | Tuesday 24 March 2026 04:55:11 +0000 (0:00:01.448) 0:05:52.026 ********* 2026-03-24 04:55:20.827523 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:20.827535 | orchestrator | 2026-03-24 04:55:20.827547 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 04:55:20.827559 | orchestrator | Tuesday 24 March 2026 04:55:12 +0000 (0:00:01.098) 0:05:53.125 ********* 2026-03-24 04:55:20.827571 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:20.827584 | orchestrator | 2026-03-24 04:55:20.827595 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 04:55:20.827607 | orchestrator | Tuesday 24 March 2026 04:55:13 +0000 (0:00:01.201) 0:05:54.326 ********* 2026-03-24 04:55:20.827615 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:20.827622 | orchestrator | 2026-03-24 04:55:20.827629 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 04:55:20.827636 | orchestrator | Tuesday 24 March 2026 04:55:14 +0000 (0:00:01.118) 0:05:55.444 ********* 2026-03-24 04:55:20.827643 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:55:20.827651 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-24 04:55:20.827658 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-24 04:55:20.827665 | orchestrator | 2026-03-24 04:55:20.827672 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 04:55:20.827679 | orchestrator | Tuesday 24 March 2026 04:55:16 +0000 (0:00:01.945) 0:05:57.390 ********* 2026-03-24 04:55:20.827688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 04:55:20.827696 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 04:55:20.827704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 04:55:20.827713 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:20.827721 | orchestrator | 2026-03-24 04:55:20.827729 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 04:55:20.827737 | orchestrator | Tuesday 24 March 2026 04:55:17 +0000 (0:00:01.149) 0:05:58.540 ********* 2026-03-24 04:55:20.827745 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:55:20.827753 | orchestrator | 2026-03-24 04:55:20.827762 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 04:55:20.827770 | orchestrator | Tuesday 24 March 2026 04:55:18 +0000 (0:00:01.137) 0:05:59.677 ********* 2026-03-24 04:55:20.827778 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:55:20.827786 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 
04:55:20.827795 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 04:55:20.827803 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 04:55:20.827817 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 04:55:20.827832 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 04:56:21.150910 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 04:56:21.151004 | orchestrator | 2026-03-24 04:56:21.151014 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 04:56:21.151023 | orchestrator | Tuesday 24 March 2026 04:55:20 +0000 (0:00:02.037) 0:06:01.715 ********* 2026-03-24 04:56:21.151030 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:56:21.151037 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 04:56:21.151043 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 04:56:21.151099 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 04:56:21.151124 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 04:56:21.151131 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 04:56:21.151138 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 04:56:21.151148 | orchestrator | 2026-03-24 04:56:21.151159 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-24 04:56:21.151169 | orchestrator | Tuesday 24 March 2026 04:55:23 +0000 (0:00:02.706) 0:06:04.421 
*********
2026-03-24 04:56:21.151185 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-03-24 04:56:21.151196 | orchestrator |
2026-03-24 04:56:21.151205 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-03-24 04:56:21.151215 | orchestrator | Tuesday 24 March 2026 04:55:25 +0000 (0:00:02.348) 0:06:06.770 *********
2026-03-24 04:56:21.151225 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.151236 | orchestrator |
2026-03-24 04:56:21.151246 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-03-24 04:56:21.151256 | orchestrator | Tuesday 24 March 2026 04:55:27 +0000 (0:00:01.212) 0:06:07.982 *********
2026-03-24 04:56:21.151266 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.151276 | orchestrator |
2026-03-24 04:56:21.151286 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-03-24 04:56:21.151296 | orchestrator | Tuesday 24 March 2026 04:55:28 +0000 (0:00:01.129) 0:06:09.112 *********
2026-03-24 04:56:21.151306 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-03-24 04:56:21.151316 | orchestrator |
2026-03-24 04:56:21.151327 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-03-24 04:56:21.151337 | orchestrator | Tuesday 24 March 2026 04:55:30 +0000 (0:00:02.331) 0:06:11.444 *********
2026-03-24 04:56:21.151349 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.151359 | orchestrator |
2026-03-24 04:56:21.151370 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-03-24 04:56:21.151382 | orchestrator | Tuesday 24 March 2026 04:55:31 +0000 (0:00:01.113) 0:06:12.558 *********
2026-03-24 04:56:21.151392 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 04:56:21.151402 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 04:56:21.151411 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 04:56:21.151421 | orchestrator |
2026-03-24 04:56:21.151431 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-24 04:56:21.151443 | orchestrator | Tuesday 24 March 2026 04:55:34 +0000 (0:00:02.484) 0:06:15.043 *********
2026-03-24 04:56:21.151454 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-24 04:56:21.151464 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-24 04:56:21.151475 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-24 04:56:21.151485 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-24 04:56:21.151495 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-24 04:56:21.151508 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-24 04:56:21.151519 | orchestrator |
2026-03-24 04:56:21.151530 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-24 04:56:21.151541 | orchestrator | Tuesday 24 March 2026 04:55:47 +0000 (0:00:13.253) 0:06:28.296 *********
2026-03-24 04:56:21.151552 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 04:56:21.151563 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 04:56:21.151587 | orchestrator |
2026-03-24 04:56:21.151598 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-24 04:56:21.151608 | orchestrator | Tuesday 24 March 2026 04:55:51 +0000 (0:00:03.880) 0:06:32.178 *********
2026-03-24 04:56:21.151619 | orchestrator | changed: [testbed-node-0]
2026-03-24 04:56:21.151629 | orchestrator |
2026-03-24 04:56:21.151639 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 04:56:21.151649 | orchestrator | Tuesday 24 March 2026 04:55:53 +0000 (0:00:02.555) 0:06:34.733 *********
2026-03-24 04:56:21.151659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-24 04:56:21.151669 | orchestrator |
2026-03-24 04:56:21.151679 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 04:56:21.151705 | orchestrator | Tuesday 24 March 2026 04:55:55 +0000 (0:00:01.457) 0:06:36.191 *********
2026-03-24 04:56:21.151737 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-24 04:56:21.151748 | orchestrator |
2026-03-24 04:56:21.151758 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 04:56:21.151770 | orchestrator | Tuesday 24 March 2026 04:55:56 +0000 (0:00:01.528) 0:06:37.720 *********
2026-03-24 04:56:21.151781 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:56:21.151792 | orchestrator |
2026-03-24 04:56:21.151804 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 04:56:21.151816 | orchestrator | Tuesday 24 March 2026 04:55:58 +0000 (0:00:01.557) 0:06:39.277 *********
2026-03-24 04:56:21.151846 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.151867 | orchestrator |
2026-03-24 04:56:21.151878 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 04:56:21.151889 | orchestrator | Tuesday 24 March 2026 04:55:59 +0000 (0:00:01.142) 0:06:40.420 *********
2026-03-24 04:56:21.151899 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.151909 | orchestrator |
2026-03-24 04:56:21.151920 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 04:56:21.151930 | orchestrator | Tuesday 24 March 2026 04:56:00 +0000 (0:00:01.138) 0:06:41.558 *********
2026-03-24 04:56:21.151941 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.151952 | orchestrator |
2026-03-24 04:56:21.151963 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 04:56:21.151974 | orchestrator | Tuesday 24 March 2026 04:56:01 +0000 (0:00:01.100) 0:06:42.659 *********
2026-03-24 04:56:21.151985 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:56:21.151996 | orchestrator |
2026-03-24 04:56:21.152006 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 04:56:21.152017 | orchestrator | Tuesday 24 March 2026 04:56:03 +0000 (0:00:01.562) 0:06:44.221 *********
2026-03-24 04:56:21.152027 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.152037 | orchestrator |
2026-03-24 04:56:21.152069 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 04:56:21.152080 | orchestrator | Tuesday 24 March 2026 04:56:04 +0000 (0:00:01.137) 0:06:45.359 *********
2026-03-24 04:56:21.152090 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.152101 | orchestrator |
2026-03-24 04:56:21.152112 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 04:56:21.152123 | orchestrator | Tuesday 24 March 2026 04:56:05 +0000 (0:00:01.130) 0:06:46.489 *********
2026-03-24 04:56:21.152133 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:56:21.152143 | orchestrator |
2026-03-24 04:56:21.152153 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 04:56:21.152163 | orchestrator | Tuesday 24 March 2026 04:56:07 +0000 (0:00:01.528) 0:06:48.018 *********
2026-03-24 04:56:21.152173 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:56:21.152182 | orchestrator |
2026-03-24 04:56:21.152192 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 04:56:21.152212 | orchestrator | Tuesday 24 March 2026 04:56:08 +0000 (0:00:01.640) 0:06:49.658 *********
2026-03-24 04:56:21.152223 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.152233 | orchestrator |
2026-03-24 04:56:21.152243 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 04:56:21.152254 | orchestrator | Tuesday 24 March 2026 04:56:09 +0000 (0:00:01.104) 0:06:50.763 *********
2026-03-24 04:56:21.152264 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:56:21.152275 | orchestrator |
2026-03-24 04:56:21.152286 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 04:56:21.152297 | orchestrator | Tuesday 24 March 2026 04:56:10 +0000 (0:00:01.132) 0:06:51.895 *********
2026-03-24 04:56:21.152308 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.152319 | orchestrator |
2026-03-24 04:56:21.152330 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 04:56:21.152340 | orchestrator | Tuesday 24 March 2026 04:56:12 +0000 (0:00:01.123) 0:06:53.019 *********
2026-03-24 04:56:21.152351 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.152361 | orchestrator |
2026-03-24 04:56:21.152371 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 04:56:21.152382 | orchestrator | Tuesday 24 March 2026 04:56:13 +0000 (0:00:01.102) 0:06:54.122 *********
2026-03-24 04:56:21.152393 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.152404 | orchestrator |
2026-03-24 04:56:21.152415 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 04:56:21.152426 | orchestrator | Tuesday 24 March 2026 04:56:14 +0000 (0:00:01.121) 0:06:55.243 *********
2026-03-24 04:56:21.152437 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.152447 | orchestrator |
2026-03-24 04:56:21.152457 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 04:56:21.152468 | orchestrator | Tuesday 24 March 2026 04:56:15 +0000 (0:00:01.147) 0:06:56.391 *********
2026-03-24 04:56:21.152479 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.152490 | orchestrator |
2026-03-24 04:56:21.152500 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 04:56:21.152512 | orchestrator | Tuesday 24 March 2026 04:56:16 +0000 (0:00:01.120) 0:06:57.512 *********
2026-03-24 04:56:21.152523 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:56:21.152534 | orchestrator |
2026-03-24 04:56:21.152546 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 04:56:21.152557 | orchestrator | Tuesday 24 March 2026 04:56:17 +0000 (0:00:01.126) 0:06:58.639 *********
2026-03-24 04:56:21.152568 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:56:21.152579 | orchestrator |
2026-03-24 04:56:21.152589 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 04:56:21.152600 | orchestrator | Tuesday 24 March 2026 04:56:18 +0000 (0:00:01.134) 0:06:59.773 *********
2026-03-24 04:56:21.152611 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:56:21.152621 | orchestrator |
2026-03-24 04:56:21.152632 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 04:56:21.152651 | orchestrator | Tuesday 24 March 2026 04:56:19 +0000 (0:00:01.126) 0:07:00.900 *********
2026-03-24 04:56:21.152663 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:56:21.152673 | orchestrator |
2026-03-24 04:56:21.152684 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 04:56:21.152709 | orchestrator | Tuesday 24 March 2026 04:56:21 +0000 (0:00:01.137) 0:07:02.038 *********
2026-03-24 04:57:10.661077 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661168 | orchestrator |
2026-03-24 04:57:10.661180 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 04:57:10.661188 | orchestrator | Tuesday 24 March 2026 04:56:22 +0000 (0:00:01.104) 0:07:03.143 *********
2026-03-24 04:57:10.661196 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661202 | orchestrator |
2026-03-24 04:57:10.661209 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 04:57:10.661236 | orchestrator | Tuesday 24 March 2026 04:56:23 +0000 (0:00:01.092) 0:07:04.235 *********
2026-03-24 04:57:10.661243 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661249 | orchestrator |
2026-03-24 04:57:10.661255 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 04:57:10.661262 | orchestrator | Tuesday 24 March 2026 04:56:24 +0000 (0:00:01.105) 0:07:05.341 *********
2026-03-24 04:57:10.661268 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661274 | orchestrator |
2026-03-24 04:57:10.661280 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 04:57:10.661287 | orchestrator | Tuesday 24 March 2026 04:56:25 +0000 (0:00:01.160) 0:07:06.501 *********
2026-03-24 04:57:10.661293 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661299 | orchestrator |
2026-03-24 04:57:10.661305 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 04:57:10.661312 | orchestrator | Tuesday 24 March 2026 04:56:26 +0000 (0:00:01.097) 0:07:07.599 *********
2026-03-24 04:57:10.661318 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661324 | orchestrator |
2026-03-24 04:57:10.661334 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 04:57:10.661345 | orchestrator | Tuesday 24 March 2026 04:56:27 +0000 (0:00:01.093) 0:07:08.693 *********
2026-03-24 04:57:10.661358 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661371 | orchestrator |
2026-03-24 04:57:10.661383 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 04:57:10.661393 | orchestrator | Tuesday 24 March 2026 04:56:28 +0000 (0:00:01.102) 0:07:09.796 *********
2026-03-24 04:57:10.661403 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661413 | orchestrator |
2026-03-24 04:57:10.661423 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 04:57:10.661433 | orchestrator | Tuesday 24 March 2026 04:56:30 +0000 (0:00:01.167) 0:07:10.964 *********
2026-03-24 04:57:10.661442 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661450 | orchestrator |
2026-03-24 04:57:10.661459 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 04:57:10.661468 | orchestrator | Tuesday 24 March 2026 04:56:31 +0000 (0:00:01.116) 0:07:12.080 *********
2026-03-24 04:57:10.661477 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661487 | orchestrator |
2026-03-24 04:57:10.661497 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 04:57:10.661506 | orchestrator | Tuesday 24 March 2026 04:56:32 +0000 (0:00:01.092) 0:07:13.173 *********
2026-03-24 04:57:10.661516 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661526 | orchestrator |
2026-03-24 04:57:10.661536 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 04:57:10.661545 | orchestrator | Tuesday 24 March 2026 04:56:33 +0000 (0:00:01.181) 0:07:14.355 *********
2026-03-24 04:57:10.661554 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:57:10.661565 | orchestrator |
2026-03-24 04:57:10.661575 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 04:57:10.661585 | orchestrator | Tuesday 24 March 2026 04:56:35 +0000 (0:00:02.011) 0:07:16.367 *********
2026-03-24 04:57:10.661595 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:57:10.661607 | orchestrator |
2026-03-24 04:57:10.661617 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 04:57:10.661628 | orchestrator | Tuesday 24 March 2026 04:56:37 +0000 (0:00:02.501) 0:07:18.869 *********
2026-03-24 04:57:10.661639 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-24 04:57:10.661647 | orchestrator |
2026-03-24 04:57:10.661654 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 04:57:10.661661 | orchestrator | Tuesday 24 March 2026 04:56:39 +0000 (0:00:01.471) 0:07:20.340 *********
2026-03-24 04:57:10.661668 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661676 | orchestrator |
2026-03-24 04:57:10.661683 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 04:57:10.661698 | orchestrator | Tuesday 24 March 2026 04:56:40 +0000 (0:00:01.120) 0:07:21.461 *********
2026-03-24 04:57:10.661705 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661712 | orchestrator |
2026-03-24 04:57:10.661718 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 04:57:10.661725 | orchestrator | Tuesday 24 March 2026 04:56:41 +0000 (0:00:01.086) 0:07:22.547 *********
2026-03-24 04:57:10.661733 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 04:57:10.661739 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 04:57:10.661745 | orchestrator |
2026-03-24 04:57:10.661752 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 04:57:10.661758 | orchestrator | Tuesday 24 March 2026 04:56:43 +0000 (0:00:01.808) 0:07:24.356 *********
2026-03-24 04:57:10.661764 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:57:10.661770 | orchestrator |
2026-03-24 04:57:10.661777 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-24 04:57:10.661783 | orchestrator | Tuesday 24 March 2026 04:56:45 +0000 (0:00:01.645) 0:07:26.002 *********
2026-03-24 04:57:10.661789 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661795 | orchestrator |
2026-03-24 04:57:10.661812 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-24 04:57:10.661818 | orchestrator | Tuesday 24 March 2026 04:56:46 +0000 (0:00:01.124) 0:07:27.126 *********
2026-03-24 04:57:10.661825 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661831 | orchestrator |
2026-03-24 04:57:10.661851 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 04:57:10.661857 | orchestrator | Tuesday 24 March 2026 04:56:47 +0000 (0:00:01.116) 0:07:28.242 *********
2026-03-24 04:57:10.661863 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661870 | orchestrator |
2026-03-24 04:57:10.661876 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 04:57:10.661882 | orchestrator | Tuesday 24 March 2026 04:56:48 +0000 (0:00:01.178) 0:07:29.421 *********
2026-03-24 04:57:10.661888 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-24 04:57:10.661894 | orchestrator |
2026-03-24 04:57:10.661900 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-24 04:57:10.661906 | orchestrator | Tuesday 24 March 2026 04:56:49 +0000 (0:00:01.443) 0:07:30.864 *********
2026-03-24 04:57:10.661912 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:57:10.661918 | orchestrator |
2026-03-24 04:57:10.661924 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-24 04:57:10.661930 | orchestrator | Tuesday 24 March 2026 04:56:51 +0000 (0:00:01.709) 0:07:32.574 *********
2026-03-24 04:57:10.661937 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-24 04:57:10.661943 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-24 04:57:10.661949 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-24 04:57:10.661955 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661961 | orchestrator |
2026-03-24 04:57:10.661967 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-24 04:57:10.661973 | orchestrator | Tuesday 24 March 2026 04:56:52 +0000 (0:00:01.133) 0:07:33.708 *********
2026-03-24 04:57:10.661979 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.661985 | orchestrator |
2026-03-24 04:57:10.661991 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-24 04:57:10.661997 | orchestrator | Tuesday 24 March 2026 04:56:53 +0000 (0:00:01.111) 0:07:34.819 *********
2026-03-24 04:57:10.662003 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662009 | orchestrator |
2026-03-24 04:57:10.662137 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-24 04:57:10.662167 | orchestrator | Tuesday 24 March 2026 04:56:55 +0000 (0:00:01.209) 0:07:36.029 *********
2026-03-24 04:57:10.662207 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662218 | orchestrator |
2026-03-24 04:57:10.662227 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-24 04:57:10.662237 | orchestrator | Tuesday 24 March 2026 04:56:56 +0000 (0:00:01.142) 0:07:37.172 *********
2026-03-24 04:57:10.662247 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662258 | orchestrator |
2026-03-24 04:57:10.662270 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-24 04:57:10.662281 | orchestrator | Tuesday 24 March 2026 04:56:57 +0000 (0:00:01.109) 0:07:38.281 *********
2026-03-24 04:57:10.662291 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662302 | orchestrator |
2026-03-24 04:57:10.662309 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 04:57:10.662316 | orchestrator | Tuesday 24 March 2026 04:56:58 +0000 (0:00:01.157) 0:07:39.439 *********
2026-03-24 04:57:10.662322 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:57:10.662328 | orchestrator |
2026-03-24 04:57:10.662334 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 04:57:10.662341 | orchestrator | Tuesday 24 March 2026 04:57:01 +0000 (0:00:02.627) 0:07:42.067 *********
2026-03-24 04:57:10.662347 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:57:10.662353 | orchestrator |
2026-03-24 04:57:10.662360 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 04:57:10.662366 | orchestrator | Tuesday 24 March 2026 04:57:02 +0000 (0:00:01.135) 0:07:43.203 *********
2026-03-24 04:57:10.662372 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-24 04:57:10.662378 | orchestrator |
2026-03-24 04:57:10.662384 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-24 04:57:10.662390 | orchestrator | Tuesday 24 March 2026 04:57:03 +0000 (0:00:01.458) 0:07:44.661 *********
2026-03-24 04:57:10.662396 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662402 | orchestrator |
2026-03-24 04:57:10.662409 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-24 04:57:10.662415 | orchestrator | Tuesday 24 March 2026 04:57:04 +0000 (0:00:01.125) 0:07:45.787 *********
2026-03-24 04:57:10.662421 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662427 | orchestrator |
2026-03-24 04:57:10.662433 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-24 04:57:10.662439 | orchestrator | Tuesday 24 March 2026 04:57:06 +0000 (0:00:01.194) 0:07:46.982 *********
2026-03-24 04:57:10.662445 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662451 | orchestrator |
2026-03-24 04:57:10.662457 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-24 04:57:10.662463 | orchestrator | Tuesday 24 March 2026 04:57:07 +0000 (0:00:01.126) 0:07:48.109 *********
2026-03-24 04:57:10.662469 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662476 | orchestrator |
2026-03-24 04:57:10.662482 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-24 04:57:10.662488 | orchestrator | Tuesday 24 March 2026 04:57:08 +0000 (0:00:01.155) 0:07:49.264 *********
2026-03-24 04:57:10.662494 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662500 | orchestrator |
2026-03-24 04:57:10.662506 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-24 04:57:10.662519 | orchestrator | Tuesday 24 March 2026 04:57:09 +0000 (0:00:01.115) 0:07:50.380 *********
2026-03-24 04:57:10.662525 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:10.662532 | orchestrator |
2026-03-24 04:57:10.662538 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-24 04:57:10.662552 | orchestrator | Tuesday 24 March 2026 04:57:10 +0000 (0:00:01.164) 0:07:51.544 *********
2026-03-24 04:57:54.431895 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432097 | orchestrator |
2026-03-24 04:57:54.432148 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-24 04:57:54.432162 | orchestrator | Tuesday 24 March 2026 04:57:11 +0000 (0:00:01.140) 0:07:52.685 *********
2026-03-24 04:57:54.432174 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432185 | orchestrator |
2026-03-24 04:57:54.432197 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-24 04:57:54.432209 | orchestrator | Tuesday 24 March 2026 04:57:12 +0000 (0:00:01.139) 0:07:53.824 *********
2026-03-24 04:57:54.432221 | orchestrator | ok: [testbed-node-0]
2026-03-24 04:57:54.432234 | orchestrator |
2026-03-24 04:57:54.432245 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 04:57:54.432257 | orchestrator | Tuesday 24 March 2026 04:57:14 +0000 (0:00:01.129) 0:07:54.954 *********
2026-03-24 04:57:54.432268 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-24 04:57:54.432280 | orchestrator |
2026-03-24 04:57:54.432292 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-24 04:57:54.432302 | orchestrator | Tuesday 24 March 2026 04:57:15 +0000 (0:00:01.459) 0:07:56.414 *********
2026-03-24 04:57:54.432314 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-24 04:57:54.432325 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-24 04:57:54.432336 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-24 04:57:54.432347 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-24 04:57:54.432359 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-24 04:57:54.432369 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-24 04:57:54.432381 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-24 04:57:54.432392 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-24 04:57:54.432404 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-24 04:57:54.432415 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-24 04:57:54.432426 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-24 04:57:54.432438 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-24 04:57:54.432449 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-24 04:57:54.432461 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-24 04:57:54.432472 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-24 04:57:54.432484 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-24 04:57:54.432495 | orchestrator |
2026-03-24 04:57:54.432507 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-24 04:57:54.432518 | orchestrator | Tuesday 24 March 2026 04:57:22 +0000 (0:00:06.731) 0:08:03.146 *********
2026-03-24 04:57:54.432529 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432541 | orchestrator |
2026-03-24 04:57:54.432552 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-24 04:57:54.432564 | orchestrator | Tuesday 24 March 2026 04:57:23 +0000 (0:00:01.123) 0:08:04.269 *********
2026-03-24 04:57:54.432576 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432588 | orchestrator |
2026-03-24 04:57:54.432600 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-24 04:57:54.432612 | orchestrator | Tuesday 24 March 2026 04:57:24 +0000 (0:00:01.115) 0:08:05.385 *********
2026-03-24 04:57:54.432624 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432635 | orchestrator |
2026-03-24 04:57:54.432647 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-24 04:57:54.432659 | orchestrator | Tuesday 24 March 2026 04:57:25 +0000 (0:00:01.123) 0:08:06.508 *********
2026-03-24 04:57:54.432671 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432683 | orchestrator |
2026-03-24 04:57:54.432696 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-24 04:57:54.432717 | orchestrator | Tuesday 24 March 2026 04:57:26 +0000 (0:00:01.105) 0:08:07.614 *********
2026-03-24 04:57:54.432729 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432740 | orchestrator |
2026-03-24 04:57:54.432753 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-24 04:57:54.432766 | orchestrator | Tuesday 24 March 2026 04:57:27 +0000 (0:00:01.099) 0:08:08.713 *********
2026-03-24 04:57:54.432777 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432789 | orchestrator |
2026-03-24 04:57:54.432801 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-24 04:57:54.432813 | orchestrator | Tuesday 24 March 2026 04:57:28 +0000 (0:00:01.101) 0:08:09.815 *********
2026-03-24 04:57:54.432824 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432835 | orchestrator |
2026-03-24 04:57:54.432846 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-24 04:57:54.432858 | orchestrator | Tuesday 24 March 2026 04:57:30 +0000 (0:00:01.088) 0:08:10.903 *********
2026-03-24 04:57:54.432869 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432879 | orchestrator |
2026-03-24 04:57:54.432891 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-24 04:57:54.432902 | orchestrator | Tuesday 24 March 2026 04:57:31 +0000 (0:00:01.091) 0:08:11.995 *********
2026-03-24 04:57:54.432913 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432925 | orchestrator |
2026-03-24 04:57:54.432935 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-24 04:57:54.432962 | orchestrator | Tuesday 24 March 2026 04:57:32 +0000 (0:00:01.063) 0:08:13.058 *********
2026-03-24 04:57:54.432974 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.432985 | orchestrator |
2026-03-24 04:57:54.432997 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-24 04:57:54.433050 | orchestrator | Tuesday 24 March 2026 04:57:33 +0000 (0:00:01.090) 0:08:14.149 *********
2026-03-24 04:57:54.433062 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433073 | orchestrator |
2026-03-24 04:57:54.433084 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-24 04:57:54.433096 | orchestrator | Tuesday 24 March 2026 04:57:34 +0000 (0:00:01.081) 0:08:15.231 *********
2026-03-24 04:57:54.433108 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433120 | orchestrator |
2026-03-24 04:57:54.433131 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-24 04:57:54.433142 | orchestrator | Tuesday 24 March 2026 04:57:35 +0000 (0:00:01.099) 0:08:16.330 *********
2026-03-24 04:57:54.433153 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433165 | orchestrator |
2026-03-24 04:57:54.433178 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-24 04:57:54.433190 | orchestrator | Tuesday 24 March 2026 04:57:36 +0000 (0:00:01.177) 0:08:17.507 *********
2026-03-24 04:57:54.433203 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433214 | orchestrator |
2026-03-24 04:57:54.433225 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-24 04:57:54.433236 | orchestrator | Tuesday 24 March 2026 04:57:37 +0000 (0:00:01.093) 0:08:18.601 *********
2026-03-24 04:57:54.433247 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433257 | orchestrator |
2026-03-24 04:57:54.433269 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-24 04:57:54.433280 | orchestrator | Tuesday 24 March 2026 04:57:38 +0000 (0:00:01.216) 0:08:19.818 *********
2026-03-24 04:57:54.433291 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433302 | orchestrator |
2026-03-24 04:57:54.433313 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-24 04:57:54.433324 | orchestrator | Tuesday 24 March 2026 04:57:40 +0000 (0:00:01.108) 0:08:20.927 *********
2026-03-24 04:57:54.433335 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433384 | orchestrator |
2026-03-24 04:57:54.433398 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 04:57:54.433410 | orchestrator | Tuesday 24 March 2026 04:57:41 +0000 (0:00:01.092) 0:08:22.019 *********
2026-03-24 04:57:54.433423 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433435 | orchestrator |
2026-03-24 04:57:54.433447 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 04:57:54.433458 | orchestrator | Tuesday 24 March 2026 04:57:42 +0000 (0:00:01.115) 0:08:23.135 *********
2026-03-24 04:57:54.433470 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433482 | orchestrator |
2026-03-24 04:57:54.433494 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 04:57:54.433506 | orchestrator | Tuesday 24 March 2026 04:57:43 +0000 (0:00:01.113) 0:08:24.249 *********
2026-03-24 04:57:54.433517 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433528 | orchestrator |
2026-03-24 04:57:54.433541 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 04:57:54.433553 | orchestrator | Tuesday 24 March 2026 04:57:44 +0000 (0:00:01.168) 0:08:25.417 *********
2026-03-24 04:57:54.433565 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433576 | orchestrator |
2026-03-24 04:57:54.433588 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 04:57:54.433604 | orchestrator | Tuesday 24 March 2026 04:57:45 +0000 (0:00:01.162) 0:08:26.579 *********
2026-03-24 04:57:54.433615 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-24 04:57:54.433628 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-24 04:57:54.433639 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-24 04:57:54.433651 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433663 | orchestrator |
2026-03-24 04:57:54.433676 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 04:57:54.433689 | orchestrator | Tuesday 24 March 2026 04:57:47 +0000 (0:00:01.706) 0:08:28.286 *********
2026-03-24 04:57:54.433700 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-24 04:57:54.433712 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-24 04:57:54.433724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-24 04:57:54.433736 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433747 | orchestrator |
2026-03-24 04:57:54.433759 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 04:57:54.433772 | orchestrator | Tuesday 24 March 2026 04:57:48 +0000 (0:00:01.403) 0:08:29.689 *********
2026-03-24 04:57:54.433783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-24 04:57:54.433796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-24 04:57:54.433808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-24 04:57:54.433819 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433832 | orchestrator |
2026-03-24 04:57:54.433844 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 04:57:54.433857 | orchestrator | Tuesday 24 March 2026 04:57:50 +0000 (0:00:01.407) 0:08:31.096 *********
2026-03-24 04:57:54.433868 | orchestrator | skipping: [testbed-node-0]
2026-03-24 04:57:54.433880 | orchestrator |
2026-03-24 04:57:54.433892 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 04:57:54.433904 | orchestrator | Tuesday 24 March 2026 04:57:51 +0000 (0:00:01.126) 0:08:32.223 *********
2026-03-24 04:57:54.433916 | orchestrator |
skipping: [testbed-node-0] => (item=0)  2026-03-24 04:57:54.433927 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:57:54.433939 | orchestrator | 2026-03-24 04:57:54.433951 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 04:57:54.433972 | orchestrator | Tuesday 24 March 2026 04:57:52 +0000 (0:00:01.341) 0:08:33.565 ********* 2026-03-24 04:57:54.433985 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:57:54.434102 | orchestrator | 2026-03-24 04:57:54.434118 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-24 04:57:54.434144 | orchestrator | Tuesday 24 March 2026 04:57:54 +0000 (0:00:01.754) 0:08:35.319 ********* 2026-03-24 04:59:00.352544 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.352685 | orchestrator | 2026-03-24 04:59:00.352716 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-24 04:59:00.352735 | orchestrator | Tuesday 24 March 2026 04:57:55 +0000 (0:00:01.222) 0:08:36.542 ********* 2026-03-24 04:59:00.352753 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-03-24 04:59:00.352773 | orchestrator | 2026-03-24 04:59:00.352789 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-24 04:59:00.352805 | orchestrator | Tuesday 24 March 2026 04:57:57 +0000 (0:00:01.563) 0:08:38.106 ********* 2026-03-24 04:59:00.352823 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-24 04:59:00.352841 | orchestrator | 2026-03-24 04:59:00.352861 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-24 04:59:00.352881 | orchestrator | Tuesday 24 March 2026 04:58:00 +0000 (0:00:03.495) 0:08:41.601 ********* 2026-03-24 04:59:00.352899 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:00.352917 | 
orchestrator | 2026-03-24 04:59:00.352935 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-24 04:59:00.352955 | orchestrator | Tuesday 24 March 2026 04:58:01 +0000 (0:00:01.151) 0:08:42.752 ********* 2026-03-24 04:59:00.353005 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353026 | orchestrator | 2026-03-24 04:59:00.353045 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-24 04:59:00.353066 | orchestrator | Tuesday 24 March 2026 04:58:02 +0000 (0:00:01.119) 0:08:43.871 ********* 2026-03-24 04:59:00.353086 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353106 | orchestrator | 2026-03-24 04:59:00.353126 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-24 04:59:00.353148 | orchestrator | Tuesday 24 March 2026 04:58:04 +0000 (0:00:01.163) 0:08:45.035 ********* 2026-03-24 04:59:00.353170 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:59:00.353190 | orchestrator | 2026-03-24 04:59:00.353204 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-24 04:59:00.353217 | orchestrator | Tuesday 24 March 2026 04:58:06 +0000 (0:00:02.052) 0:08:47.088 ********* 2026-03-24 04:59:00.353230 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353241 | orchestrator | 2026-03-24 04:59:00.353252 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-24 04:59:00.353263 | orchestrator | Tuesday 24 March 2026 04:58:07 +0000 (0:00:01.586) 0:08:48.675 ********* 2026-03-24 04:59:00.353274 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353285 | orchestrator | 2026-03-24 04:59:00.353296 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-24 04:59:00.353307 | orchestrator | Tuesday 24 March 2026 04:58:09 +0000 (0:00:01.451) 
0:08:50.127 ********* 2026-03-24 04:59:00.353318 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353328 | orchestrator | 2026-03-24 04:59:00.353339 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-24 04:59:00.353350 | orchestrator | Tuesday 24 March 2026 04:58:10 +0000 (0:00:01.481) 0:08:51.608 ********* 2026-03-24 04:59:00.353361 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353374 | orchestrator | 2026-03-24 04:59:00.353392 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-24 04:59:00.353409 | orchestrator | Tuesday 24 March 2026 04:58:12 +0000 (0:00:01.684) 0:08:53.293 ********* 2026-03-24 04:59:00.353424 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353440 | orchestrator | 2026-03-24 04:59:00.353456 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-24 04:59:00.353473 | orchestrator | Tuesday 24 March 2026 04:58:14 +0000 (0:00:01.687) 0:08:54.980 ********* 2026-03-24 04:59:00.353527 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-24 04:59:00.353545 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-24 04:59:00.353563 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-24 04:59:00.353578 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-24 04:59:00.353596 | orchestrator | 2026-03-24 04:59:00.353614 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-24 04:59:00.353631 | orchestrator | Tuesday 24 March 2026 04:58:17 +0000 (0:00:03.860) 0:08:58.841 ********* 2026-03-24 04:59:00.353648 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:59:00.353665 | orchestrator | 2026-03-24 04:59:00.353682 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-24 
04:59:00.353701 | orchestrator | Tuesday 24 March 2026 04:58:19 +0000 (0:00:02.033) 0:09:00.874 ********* 2026-03-24 04:59:00.353720 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353739 | orchestrator | 2026-03-24 04:59:00.353751 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-24 04:59:00.353762 | orchestrator | Tuesday 24 March 2026 04:58:21 +0000 (0:00:01.123) 0:09:01.998 ********* 2026-03-24 04:59:00.353773 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353784 | orchestrator | 2026-03-24 04:59:00.353795 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-24 04:59:00.353805 | orchestrator | Tuesday 24 March 2026 04:58:22 +0000 (0:00:01.107) 0:09:03.105 ********* 2026-03-24 04:59:00.353816 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353827 | orchestrator | 2026-03-24 04:59:00.353838 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-24 04:59:00.353848 | orchestrator | Tuesday 24 March 2026 04:58:24 +0000 (0:00:02.034) 0:09:05.139 ********* 2026-03-24 04:59:00.353859 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.353869 | orchestrator | 2026-03-24 04:59:00.353880 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-24 04:59:00.353909 | orchestrator | Tuesday 24 March 2026 04:58:25 +0000 (0:00:01.471) 0:09:06.611 ********* 2026-03-24 04:59:00.353920 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:00.353931 | orchestrator | 2026-03-24 04:59:00.353942 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-24 04:59:00.353952 | orchestrator | Tuesday 24 March 2026 04:58:26 +0000 (0:00:01.086) 0:09:07.697 ********* 2026-03-24 04:59:00.354083 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-03-24 
04:59:00.354100 | orchestrator | 2026-03-24 04:59:00.354111 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-24 04:59:00.354122 | orchestrator | Tuesday 24 March 2026 04:58:28 +0000 (0:00:01.480) 0:09:09.178 ********* 2026-03-24 04:59:00.354133 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:00.354150 | orchestrator | 2026-03-24 04:59:00.354168 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-24 04:59:00.354185 | orchestrator | Tuesday 24 March 2026 04:58:29 +0000 (0:00:01.108) 0:09:10.286 ********* 2026-03-24 04:59:00.354204 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:00.354223 | orchestrator | 2026-03-24 04:59:00.354259 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-24 04:59:00.354282 | orchestrator | Tuesday 24 March 2026 04:58:30 +0000 (0:00:01.135) 0:09:11.421 ********* 2026-03-24 04:59:00.354293 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-03-24 04:59:00.354304 | orchestrator | 2026-03-24 04:59:00.354315 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-24 04:59:00.354326 | orchestrator | Tuesday 24 March 2026 04:58:31 +0000 (0:00:01.449) 0:09:12.871 ********* 2026-03-24 04:59:00.354345 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.354364 | orchestrator | 2026-03-24 04:59:00.354393 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-24 04:59:00.354431 | orchestrator | Tuesday 24 March 2026 04:58:34 +0000 (0:00:02.307) 0:09:15.178 ********* 2026-03-24 04:59:00.354450 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.354469 | orchestrator | 2026-03-24 04:59:00.354486 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-24 
04:59:00.354506 | orchestrator | Tuesday 24 March 2026 04:58:36 +0000 (0:00:01.989) 0:09:17.168 ********* 2026-03-24 04:59:00.354525 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.354544 | orchestrator | 2026-03-24 04:59:00.354565 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-24 04:59:00.354584 | orchestrator | Tuesday 24 March 2026 04:58:38 +0000 (0:00:02.440) 0:09:19.608 ********* 2026-03-24 04:59:00.354603 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:59:00.354617 | orchestrator | 2026-03-24 04:59:00.354628 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-24 04:59:00.354639 | orchestrator | Tuesday 24 March 2026 04:58:42 +0000 (0:00:03.497) 0:09:23.106 ********* 2026-03-24 04:59:00.354649 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-03-24 04:59:00.354660 | orchestrator | 2026-03-24 04:59:00.354671 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-24 04:59:00.354682 | orchestrator | Tuesday 24 March 2026 04:58:43 +0000 (0:00:01.558) 0:09:24.664 ********* 2026-03-24 04:59:00.354692 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.354703 | orchestrator | 2026-03-24 04:59:00.354714 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-24 04:59:00.354725 | orchestrator | Tuesday 24 March 2026 04:58:46 +0000 (0:00:02.275) 0:09:26.940 ********* 2026-03-24 04:59:00.354735 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:00.354746 | orchestrator | 2026-03-24 04:59:00.354757 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-24 04:59:00.354768 | orchestrator | Tuesday 24 March 2026 04:58:49 +0000 (0:00:03.053) 0:09:29.994 ********* 2026-03-24 04:59:00.354778 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:00.354789 | orchestrator | 2026-03-24 04:59:00.354800 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-24 04:59:00.354811 | orchestrator | Tuesday 24 March 2026 04:58:50 +0000 (0:00:01.105) 0:09:31.099 ********* 2026-03-24 04:59:00.354825 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-24 04:59:00.354840 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-03-24 04:59:00.354851 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-24 04:59:00.354872 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-24 04:59:00.354898 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-24 04:59:42.331196 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}])  2026-03-24 04:59:42.331290 | orchestrator | 2026-03-24 04:59:42.331302 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-24 04:59:42.331310 | orchestrator | Tuesday 24 March 2026 04:59:00 +0000 (0:00:10.139) 0:09:41.239 ********* 
2026-03-24 04:59:42.331317 | orchestrator | changed: [testbed-node-0] 2026-03-24 04:59:42.331324 | orchestrator | 2026-03-24 04:59:42.331331 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 04:59:42.331337 | orchestrator | Tuesday 24 March 2026 04:59:02 +0000 (0:00:02.627) 0:09:43.866 ********* 2026-03-24 04:59:42.331344 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 04:59:42.331350 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-24 04:59:42.331357 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-24 04:59:42.331363 | orchestrator | 2026-03-24 04:59:42.331369 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 04:59:42.331376 | orchestrator | Tuesday 24 March 2026 04:59:05 +0000 (0:00:02.190) 0:09:46.057 ********* 2026-03-24 04:59:42.331382 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 04:59:42.331389 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 04:59:42.331395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 04:59:42.331401 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:42.331408 | orchestrator | 2026-03-24 04:59:42.331414 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-24 04:59:42.331420 | orchestrator | Tuesday 24 March 2026 04:59:06 +0000 (0:00:01.385) 0:09:47.443 ********* 2026-03-24 04:59:42.331426 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:42.331432 | orchestrator | 2026-03-24 04:59:42.331439 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-24 04:59:42.331445 | orchestrator | Tuesday 24 March 2026 04:59:07 +0000 (0:00:01.119) 0:09:48.562 ********* 2026-03-24 04:59:42.331452 | orchestrator | ok: [testbed-node-0] 2026-03-24 04:59:42.331458 | orchestrator | 2026-03-24 04:59:42.331464 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 04:59:42.331470 | orchestrator | Tuesday 24 March 2026 04:59:09 +0000 (0:00:02.322) 0:09:50.885 ********* 2026-03-24 04:59:42.331477 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:42.331483 | orchestrator | 2026-03-24 04:59:42.331489 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-24 04:59:42.331495 | orchestrator | Tuesday 24 March 2026 04:59:11 +0000 (0:00:01.118) 0:09:52.004 ********* 2026-03-24 04:59:42.331501 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:42.331507 | orchestrator | 2026-03-24 04:59:42.331514 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-24 04:59:42.331520 | orchestrator | Tuesday 24 March 2026 04:59:12 +0000 (0:00:01.130) 0:09:53.134 ********* 2026-03-24 04:59:42.331526 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:42.331532 | orchestrator | 2026-03-24 04:59:42.331538 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-24 04:59:42.331544 | orchestrator | Tuesday 24 March 2026 04:59:13 +0000 (0:00:01.115) 0:09:54.249 ********* 2026-03-24 04:59:42.331550 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:42.331556 | orchestrator | 2026-03-24 04:59:42.331582 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-24 04:59:42.331589 | orchestrator | Tuesday 24 March 2026 04:59:14 +0000 (0:00:01.140) 0:09:55.389 ********* 2026-03-24 04:59:42.331595 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:42.331602 | 
orchestrator | 2026-03-24 04:59:42.331608 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-24 04:59:42.331614 | orchestrator | Tuesday 24 March 2026 04:59:15 +0000 (0:00:01.167) 0:09:56.557 ********* 2026-03-24 04:59:42.331620 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:42.331627 | orchestrator | 2026-03-24 04:59:42.331633 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-24 04:59:42.331639 | orchestrator | Tuesday 24 March 2026 04:59:16 +0000 (0:00:01.120) 0:09:57.677 ********* 2026-03-24 04:59:42.331645 | orchestrator | skipping: [testbed-node-0] 2026-03-24 04:59:42.331651 | orchestrator | 2026-03-24 04:59:42.331657 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-24 04:59:42.331663 | orchestrator | 2026-03-24 04:59:42.331670 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-24 04:59:42.331676 | orchestrator | Tuesday 24 March 2026 04:59:17 +0000 (0:00:01.036) 0:09:58.714 ********* 2026-03-24 04:59:42.331682 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.331688 | orchestrator | 2026-03-24 04:59:42.331694 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-24 04:59:42.331711 | orchestrator | Tuesday 24 March 2026 04:59:18 +0000 (0:00:01.125) 0:09:59.840 ********* 2026-03-24 04:59:42.331717 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.331724 | orchestrator | 2026-03-24 04:59:42.331730 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-24 04:59:42.331738 | orchestrator | Tuesday 24 March 2026 04:59:19 +0000 (0:00:00.797) 0:10:00.638 ********* 2026-03-24 04:59:42.331745 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:59:42.331752 | orchestrator | 2026-03-24 04:59:42.331759 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-03-24 04:59:42.331766 | orchestrator | Tuesday 24 March 2026 04:59:20 +0000 (0:00:00.784) 0:10:01.423 ********* 2026-03-24 04:59:42.331773 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.331780 | orchestrator | 2026-03-24 04:59:42.331798 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 04:59:42.331806 | orchestrator | Tuesday 24 March 2026 04:59:21 +0000 (0:00:00.767) 0:10:02.190 ********* 2026-03-24 04:59:42.331813 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-24 04:59:42.331820 | orchestrator | 2026-03-24 04:59:42.331826 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 04:59:42.331833 | orchestrator | Tuesday 24 March 2026 04:59:22 +0000 (0:00:01.209) 0:10:03.400 ********* 2026-03-24 04:59:42.331840 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.331847 | orchestrator | 2026-03-24 04:59:42.331854 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 04:59:42.331861 | orchestrator | Tuesday 24 March 2026 04:59:23 +0000 (0:00:01.459) 0:10:04.860 ********* 2026-03-24 04:59:42.331867 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.331874 | orchestrator | 2026-03-24 04:59:42.331881 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 04:59:42.331888 | orchestrator | Tuesday 24 March 2026 04:59:24 +0000 (0:00:00.975) 0:10:05.835 ********* 2026-03-24 04:59:42.331895 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.331901 | orchestrator | 2026-03-24 04:59:42.331908 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 04:59:42.331915 | orchestrator | Tuesday 24 March 2026 04:59:26 +0000 (0:00:01.409) 0:10:07.245 
********* 2026-03-24 04:59:42.331922 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.331928 | orchestrator | 2026-03-24 04:59:42.331935 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 04:59:42.331942 | orchestrator | Tuesday 24 March 2026 04:59:27 +0000 (0:00:01.091) 0:10:08.337 ********* 2026-03-24 04:59:42.331976 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.331984 | orchestrator | 2026-03-24 04:59:42.331991 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 04:59:42.331998 | orchestrator | Tuesday 24 March 2026 04:59:28 +0000 (0:00:01.091) 0:10:09.428 ********* 2026-03-24 04:59:42.332005 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.332012 | orchestrator | 2026-03-24 04:59:42.332019 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 04:59:42.332026 | orchestrator | Tuesday 24 March 2026 04:59:29 +0000 (0:00:01.130) 0:10:10.559 ********* 2026-03-24 04:59:42.332033 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:59:42.332039 | orchestrator | 2026-03-24 04:59:42.332046 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 04:59:42.332053 | orchestrator | Tuesday 24 March 2026 04:59:30 +0000 (0:00:01.160) 0:10:11.719 ********* 2026-03-24 04:59:42.332061 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.332068 | orchestrator | 2026-03-24 04:59:42.332075 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 04:59:42.332082 | orchestrator | Tuesday 24 March 2026 04:59:32 +0000 (0:00:01.184) 0:10:12.904 ********* 2026-03-24 04:59:42.332089 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 04:59:42.332096 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 
04:59:42.332102 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 04:59:42.332108 | orchestrator | 2026-03-24 04:59:42.332115 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 04:59:42.332121 | orchestrator | Tuesday 24 March 2026 04:59:33 +0000 (0:00:01.955) 0:10:14.860 ********* 2026-03-24 04:59:42.332127 | orchestrator | ok: [testbed-node-1] 2026-03-24 04:59:42.332133 | orchestrator | 2026-03-24 04:59:42.332139 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 04:59:42.332145 | orchestrator | Tuesday 24 March 2026 04:59:35 +0000 (0:00:01.245) 0:10:16.105 ********* 2026-03-24 04:59:42.332151 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 04:59:42.332157 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 04:59:42.332163 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 04:59:42.332169 | orchestrator | 2026-03-24 04:59:42.332175 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 04:59:42.332181 | orchestrator | Tuesday 24 March 2026 04:59:38 +0000 (0:00:03.156) 0:10:19.262 ********* 2026-03-24 04:59:42.332187 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-24 04:59:42.332193 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-24 04:59:42.332200 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-24 04:59:42.332206 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:59:42.332212 | orchestrator | 2026-03-24 04:59:42.332218 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 04:59:42.332224 | orchestrator | Tuesday 24 March 2026 04:59:40 +0000 (0:00:01.915) 
0:10:21.178 ********* 2026-03-24 04:59:42.332232 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 04:59:42.332244 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 04:59:42.332251 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 04:59:42.332263 | orchestrator | skipping: [testbed-node-1] 2026-03-24 04:59:42.332270 | orchestrator | 2026-03-24 04:59:42.332280 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:00:03.003007 | orchestrator | Tuesday 24 March 2026 04:59:42 +0000 (0:00:02.030) 0:10:23.208 ********* 2026-03-24 05:00:03.003115 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:03.003131 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:03.003140 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:03.003149 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003158 | orchestrator | 2026-03-24 05:00:03.003168 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:00:03.003184 | orchestrator | Tuesday 24 March 2026 04:59:43 +0000 (0:00:01.178) 0:10:24.387 ********* 2026-03-24 05:00:03.003194 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 04:59:35.758742', 'end': '2026-03-24 04:59:35.810051', 'delta': '0:00:00.051309', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:00:03.003206 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '4f8b0ade79f3', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 04:59:36.599328', 'end': 
'2026-03-24 04:59:36.647992', 'delta': '0:00:00.048664', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4f8b0ade79f3'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:00:03.003228 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'cce21668b5d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 04:59:37.180847', 'end': '2026-03-24 04:59:37.234453', 'delta': '0:00:00.053606', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cce21668b5d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:00:03.003258 | orchestrator | 2026-03-24 05:00:03.003267 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:00:03.003275 | orchestrator | Tuesday 24 March 2026 04:59:44 +0000 (0:00:01.198) 0:10:25.585 ********* 2026-03-24 05:00:03.003283 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:00:03.003292 | orchestrator | 2026-03-24 05:00:03.003314 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:00:03.003323 | orchestrator | Tuesday 24 March 2026 04:59:45 +0000 (0:00:01.309) 0:10:26.895 ********* 2026-03-24 05:00:03.003331 | orchestrator | 
skipping: [testbed-node-1] 2026-03-24 05:00:03.003338 | orchestrator | 2026-03-24 05:00:03.003347 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:00:03.003354 | orchestrator | Tuesday 24 March 2026 04:59:47 +0000 (0:00:01.277) 0:10:28.173 ********* 2026-03-24 05:00:03.003362 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:00:03.003370 | orchestrator | 2026-03-24 05:00:03.003378 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:00:03.003386 | orchestrator | Tuesday 24 March 2026 04:59:48 +0000 (0:00:01.146) 0:10:29.320 ********* 2026-03-24 05:00:03.003394 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-03-24 05:00:03.003402 | orchestrator | 2026-03-24 05:00:03.003410 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:00:03.003418 | orchestrator | Tuesday 24 March 2026 04:59:50 +0000 (0:00:01.911) 0:10:31.231 ********* 2026-03-24 05:00:03.003425 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:00:03.003433 | orchestrator | 2026-03-24 05:00:03.003441 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:00:03.003449 | orchestrator | Tuesday 24 March 2026 04:59:51 +0000 (0:00:01.166) 0:10:32.397 ********* 2026-03-24 05:00:03.003457 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003464 | orchestrator | 2026-03-24 05:00:03.003472 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:00:03.003480 | orchestrator | Tuesday 24 March 2026 04:59:52 +0000 (0:00:01.109) 0:10:33.507 ********* 2026-03-24 05:00:03.003490 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003500 | orchestrator | 2026-03-24 05:00:03.003510 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 
2026-03-24 05:00:03.003520 | orchestrator | Tuesday 24 March 2026 04:59:53 +0000 (0:00:01.224) 0:10:34.731 ********* 2026-03-24 05:00:03.003529 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003543 | orchestrator | 2026-03-24 05:00:03.003556 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:00:03.003570 | orchestrator | Tuesday 24 March 2026 04:59:54 +0000 (0:00:01.112) 0:10:35.843 ********* 2026-03-24 05:00:03.003583 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003596 | orchestrator | 2026-03-24 05:00:03.003611 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:00:03.003625 | orchestrator | Tuesday 24 March 2026 04:59:56 +0000 (0:00:01.182) 0:10:37.026 ********* 2026-03-24 05:00:03.003639 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003653 | orchestrator | 2026-03-24 05:00:03.003666 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:00:03.003680 | orchestrator | Tuesday 24 March 2026 04:59:57 +0000 (0:00:01.113) 0:10:38.140 ********* 2026-03-24 05:00:03.003689 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003697 | orchestrator | 2026-03-24 05:00:03.003705 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:00:03.003713 | orchestrator | Tuesday 24 March 2026 04:59:58 +0000 (0:00:01.137) 0:10:39.277 ********* 2026-03-24 05:00:03.003730 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003737 | orchestrator | 2026-03-24 05:00:03.003745 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:00:03.003753 | orchestrator | Tuesday 24 March 2026 04:59:59 +0000 (0:00:01.125) 0:10:40.403 ********* 2026-03-24 05:00:03.003761 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003769 
| orchestrator | 2026-03-24 05:00:03.003777 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:00:03.003785 | orchestrator | Tuesday 24 March 2026 05:00:00 +0000 (0:00:01.130) 0:10:41.534 ********* 2026-03-24 05:00:03.003793 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:03.003801 | orchestrator | 2026-03-24 05:00:03.003809 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:00:03.003817 | orchestrator | Tuesday 24 March 2026 05:00:01 +0000 (0:00:01.112) 0:10:42.646 ********* 2026-03-24 05:00:03.003825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:00:03.003840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:00:03.003848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-03-24 05:00:03.003865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:00:04.225633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:00:04.225735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:00:04.225756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 
05:00:04.225815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6bbbff7c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:00:04.225834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:00:04.225866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:00:04.225881 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:04.225896 | orchestrator | 2026-03-24 05:00:04.225911 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:00:04.225924 | orchestrator | Tuesday 24 March 2026 05:00:02 +0000 (0:00:01.243) 0:10:43.890 ********* 2026-03-24 05:00:04.225939 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:04.226006 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:04.226083 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:04.226099 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:04.226119 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:04.226143 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:20.949686 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:20.949836 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6bbbff7c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:20.949853 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:20.949879 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:00:20.949890 | orchestrator | skipping: [testbed-node-1] 2026-03-24 
05:00:20.949901 | orchestrator | 2026-03-24 05:00:20.949911 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:00:20.949922 | orchestrator | Tuesday 24 March 2026 05:00:04 +0000 (0:00:01.229) 0:10:45.120 ********* 2026-03-24 05:00:20.949931 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:00:20.950012 | orchestrator | 2026-03-24 05:00:20.950088 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:00:20.950103 | orchestrator | Tuesday 24 March 2026 05:00:05 +0000 (0:00:01.589) 0:10:46.709 ********* 2026-03-24 05:00:20.950118 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:00:20.950159 | orchestrator | 2026-03-24 05:00:20.950176 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:00:20.950191 | orchestrator | Tuesday 24 March 2026 05:00:06 +0000 (0:00:01.103) 0:10:47.813 ********* 2026-03-24 05:00:20.950207 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:00:20.950222 | orchestrator | 2026-03-24 05:00:20.950236 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:00:20.950246 | orchestrator | Tuesday 24 March 2026 05:00:08 +0000 (0:00:01.465) 0:10:49.278 ********* 2026-03-24 05:00:20.950257 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:20.950267 | orchestrator | 2026-03-24 05:00:20.950277 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:00:20.950288 | orchestrator | Tuesday 24 March 2026 05:00:09 +0000 (0:00:01.111) 0:10:50.390 ********* 2026-03-24 05:00:20.950297 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:20.950307 | orchestrator | 2026-03-24 05:00:20.950318 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:00:20.950328 | orchestrator | Tuesday 24 March 2026 05:00:10 
+0000 (0:00:01.205) 0:10:51.596 ********* 2026-03-24 05:00:20.950338 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:20.950348 | orchestrator | 2026-03-24 05:00:20.950357 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:00:20.950367 | orchestrator | Tuesday 24 March 2026 05:00:11 +0000 (0:00:01.146) 0:10:52.743 ********* 2026-03-24 05:00:20.950378 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-24 05:00:20.950388 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 05:00:20.950399 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-24 05:00:20.950409 | orchestrator | 2026-03-24 05:00:20.950419 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:00:20.950429 | orchestrator | Tuesday 24 March 2026 05:00:13 +0000 (0:00:01.924) 0:10:54.668 ********* 2026-03-24 05:00:20.950439 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-24 05:00:20.950450 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-24 05:00:20.950459 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-24 05:00:20.950469 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:20.950479 | orchestrator | 2026-03-24 05:00:20.950490 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:00:20.950500 | orchestrator | Tuesday 24 March 2026 05:00:15 +0000 (0:00:01.249) 0:10:55.917 ********* 2026-03-24 05:00:20.950510 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:20.950520 | orchestrator | 2026-03-24 05:00:20.950530 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 05:00:20.950540 | orchestrator | Tuesday 24 March 2026 05:00:16 +0000 (0:00:01.139) 0:10:57.056 ********* 2026-03-24 05:00:20.950550 | 
orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:00:20.950561 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 05:00:20.950571 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:00:20.950580 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:00:20.950589 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:00:20.950597 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:00:20.950606 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:00:20.950615 | orchestrator | 2026-03-24 05:00:20.950631 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 05:00:20.950640 | orchestrator | Tuesday 24 March 2026 05:00:17 +0000 (0:00:01.772) 0:10:58.829 ********* 2026-03-24 05:00:20.950649 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:00:20.950658 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 05:00:20.950666 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:00:20.950675 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:00:20.950684 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:00:20.950692 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:00:20.950701 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:00:20.950709 | orchestrator | 2026-03-24 05:00:20.950718 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-24 05:00:20.950727 | orchestrator | Tuesday 24 March 2026 05:00:20 +0000 (0:00:02.144) 0:11:00.973 ********* 2026-03-24 05:00:20.950736 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:00:20.950745 | orchestrator | 2026-03-24 05:00:20.950753 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-24 05:00:20.950771 | orchestrator | Tuesday 24 March 2026 05:00:20 +0000 (0:00:00.864) 0:11:01.838 ********* 2026-03-24 05:01:00.693406 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:01:00.693569 | orchestrator | 2026-03-24 05:01:00.693599 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-24 05:01:00.693620 | orchestrator | Tuesday 24 March 2026 05:00:21 +0000 (0:00:00.889) 0:11:02.727 ********* 2026-03-24 05:01:00.693637 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:01:00.693655 | orchestrator | 2026-03-24 05:01:00.693673 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-24 05:01:00.693692 | orchestrator | Tuesday 24 March 2026 05:00:22 +0000 (0:00:00.769) 0:11:03.497 ********* 2026-03-24 05:01:00.693708 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:01:00.693725 | orchestrator | 2026-03-24 05:01:00.693794 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-24 05:01:00.693814 | orchestrator | Tuesday 24 March 2026 05:00:23 +0000 (0:00:00.887) 0:11:04.384 ********* 2026-03-24 05:01:00.693832 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:01:00.693849 | orchestrator | 2026-03-24 05:01:00.693866 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-24 05:01:00.693884 | orchestrator | Tuesday 24 March 2026 05:00:24 +0000 (0:00:00.773) 0:11:05.158 ********* 
2026-03-24 05:01:00.693901 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0) 
2026-03-24 05:01:00.693920 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1) 
2026-03-24 05:01:00.693975 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2) 
2026-03-24 05:01:00.693994 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.694116 | orchestrator |
2026-03-24 05:01:00.694155 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-24 05:01:00.694176 | orchestrator | Tuesday 24 March 2026 05:00:25 +0000 (0:00:01.047) 0:11:06.205 *********
2026-03-24 05:01:00.694196 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0']) 
2026-03-24 05:01:00.694217 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1']) 
2026-03-24 05:01:00.694238 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2']) 
2026-03-24 05:01:00.694257 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 
2026-03-24 05:01:00.694278 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 
2026-03-24 05:01:00.694297 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 
2026-03-24 05:01:00.694352 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.694373 | orchestrator |
2026-03-24 05:01:00.694391 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-24 05:01:00.694412 | orchestrator | Tuesday 24 March 2026 05:00:26 +0000 (0:00:01.549) 0:11:07.755 *********
2026-03-24 05:01:00.694433 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1)
2026-03-24 05:01:00.694452 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-24 05:01:00.694471 | orchestrator |
2026-03-24 05:01:00.694488 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-24 05:01:00.694506 | orchestrator | Tuesday 24 March 2026 05:00:30 +0000 (0:00:03.191) 0:11:10.946 *********
2026-03-24 05:01:00.694524 | orchestrator | changed: [testbed-node-1]
2026-03-24 05:01:00.694542 | orchestrator |
2026-03-24 05:01:00.694559 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 05:01:00.694577 | orchestrator | Tuesday 24 March 2026 05:00:32 +0000 (0:00:02.217) 0:11:13.164 *********
2026-03-24 05:01:00.694596 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-03-24 05:01:00.694614 | orchestrator |
2026-03-24 05:01:00.694632 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 05:01:00.694649 | orchestrator | Tuesday 24 March 2026 05:00:33 +0000 (0:00:01.294) 0:11:14.459 *********
2026-03-24 05:01:00.694668 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-03-24 05:01:00.694687 | orchestrator |
2026-03-24 05:01:00.694706 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 05:01:00.694736 | orchestrator | Tuesday 24 March 2026 05:00:34 +0000 (0:00:01.124) 0:11:15.584 *********
2026-03-24 05:01:00.694756 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:00.694775 | orchestrator |
2026-03-24 05:01:00.694794 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 05:01:00.694813 | orchestrator | Tuesday 24 March 2026 05:00:36 +0000 (0:00:01.568) 0:11:17.152 *********
2026-03-24 05:01:00.694833 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.694854 | orchestrator |
2026-03-24 05:01:00.694874 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 05:01:00.694893 | orchestrator | Tuesday 24 March 2026 05:00:37 +0000 (0:00:01.129) 0:11:18.282 *********
2026-03-24 05:01:00.694913 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.694962 | orchestrator |
2026-03-24 05:01:00.694981 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 05:01:00.695000 | orchestrator | Tuesday 24 March 2026 05:00:38 +0000 (0:00:01.139) 0:11:19.422 *********
2026-03-24 05:01:00.695018 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695037 | orchestrator |
2026-03-24 05:01:00.695055 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 05:01:00.695073 | orchestrator | Tuesday 24 March 2026 05:00:39 +0000 (0:00:01.140) 0:11:20.562 *********
2026-03-24 05:01:00.695085 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:00.695096 | orchestrator |
2026-03-24 05:01:00.695107 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 05:01:00.695118 | orchestrator | Tuesday 24 March 2026 05:00:41 +0000 (0:00:01.555) 0:11:22.117 *********
2026-03-24 05:01:00.695128 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695139 | orchestrator |
2026-03-24 05:01:00.695150 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 05:01:00.695187 | orchestrator | Tuesday 24 March 2026 05:00:42 +0000 (0:00:01.102) 0:11:23.220 *********
2026-03-24 05:01:00.695199 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695209 | orchestrator |
2026-03-24 05:01:00.695220 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 05:01:00.695231 | orchestrator | Tuesday 24 March 2026 05:00:43 +0000 (0:00:01.153) 0:11:24.374 *********
2026-03-24 05:01:00.695242 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:00.695268 | orchestrator |
2026-03-24 05:01:00.695280 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 05:01:00.695291 | orchestrator | Tuesday 24 March 2026 05:00:45 +0000 (0:00:01.580) 0:11:25.955 *********
2026-03-24 05:01:00.695302 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:00.695312 | orchestrator |
2026-03-24 05:01:00.695323 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 05:01:00.695334 | orchestrator | Tuesday 24 March 2026 05:00:46 +0000 (0:00:01.536) 0:11:27.491 *********
2026-03-24 05:01:00.695345 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695356 | orchestrator |
2026-03-24 05:01:00.695366 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:01:00.695377 | orchestrator | Tuesday 24 March 2026 05:00:47 +0000 (0:00:00.856) 0:11:28.348 *********
2026-03-24 05:01:00.695388 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:00.695399 | orchestrator |
2026-03-24 05:01:00.695410 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:01:00.695420 | orchestrator | Tuesday 24 March 2026 05:00:48 +0000 (0:00:00.790) 0:11:29.139 *********
2026-03-24 05:01:00.695431 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695442 | orchestrator |
2026-03-24 05:01:00.695452 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:01:00.695463 | orchestrator | Tuesday 24 March 2026 05:00:48 +0000 (0:00:00.758) 0:11:29.897 *********
2026-03-24 05:01:00.695474 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695485 | orchestrator |
2026-03-24 05:01:00.695495 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:01:00.695506 | orchestrator | Tuesday 24 March 2026 05:00:49 +0000 (0:00:00.827) 0:11:30.724 *********
2026-03-24 05:01:00.695517 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695527 | orchestrator |
2026-03-24 05:01:00.695538 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:01:00.695549 | orchestrator | Tuesday 24 March 2026 05:00:50 +0000 (0:00:00.780) 0:11:31.505 *********
2026-03-24 05:01:00.695560 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695570 | orchestrator |
2026-03-24 05:01:00.695581 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:01:00.695592 | orchestrator | Tuesday 24 March 2026 05:00:51 +0000 (0:00:00.764) 0:11:32.270 *********
2026-03-24 05:01:00.695603 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695613 | orchestrator |
2026-03-24 05:01:00.695624 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:01:00.695635 | orchestrator | Tuesday 24 March 2026 05:00:52 +0000 (0:00:00.767) 0:11:33.038 *********
2026-03-24 05:01:00.695646 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:00.695656 | orchestrator |
2026-03-24 05:01:00.695667 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:01:00.695678 | orchestrator | Tuesday 24 March 2026 05:00:52 +0000 (0:00:00.780) 0:11:33.818 *********
2026-03-24 05:01:00.695689 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:00.695699 | orchestrator |
2026-03-24 05:01:00.695710 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:01:00.695721 | orchestrator | Tuesday 24 March 2026 05:00:53 +0000 (0:00:00.792) 0:11:34.610 *********
2026-03-24 05:01:00.695732 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:00.695743 | orchestrator |
2026-03-24 05:01:00.695753 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:01:00.695764 | orchestrator | Tuesday 24 March 2026 05:00:54 +0000 (0:00:00.776) 0:11:35.387 *********
2026-03-24 05:01:00.695775 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695786 | orchestrator |
2026-03-24 05:01:00.695797 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:01:00.695807 | orchestrator | Tuesday 24 March 2026 05:00:55 +0000 (0:00:00.797) 0:11:36.184 *********
2026-03-24 05:01:00.695818 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695837 | orchestrator |
2026-03-24 05:01:00.695855 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 05:01:00.695866 | orchestrator | Tuesday 24 March 2026 05:00:56 +0000 (0:00:00.773) 0:11:36.958 *********
2026-03-24 05:01:00.695877 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695888 | orchestrator |
2026-03-24 05:01:00.695898 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 05:01:00.695909 | orchestrator | Tuesday 24 March 2026 05:00:56 +0000 (0:00:00.779) 0:11:37.738 *********
2026-03-24 05:01:00.695974 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.695988 | orchestrator |
2026-03-24 05:01:00.695999 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 05:01:00.696010 | orchestrator | Tuesday 24 March 2026 05:00:57 +0000 (0:00:00.754) 0:11:38.493 *********
2026-03-24 05:01:00.696021 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.696031 | orchestrator |
2026-03-24 05:01:00.696042 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 05:01:00.696053 | orchestrator | Tuesday 24 March 2026 05:00:58 +0000 (0:00:00.787) 0:11:39.281 *********
2026-03-24 05:01:00.696064 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.696075 | orchestrator |
2026-03-24 05:01:00.696086 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 05:01:00.696096 | orchestrator | Tuesday 24 March 2026 05:00:59 +0000 (0:00:00.768) 0:11:40.050 *********
2026-03-24 05:01:00.696107 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.696118 | orchestrator |
2026-03-24 05:01:00.696129 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 05:01:00.696140 | orchestrator | Tuesday 24 March 2026 05:00:59 +0000 (0:00:00.758) 0:11:40.808 *********
2026-03-24 05:01:00.696151 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:00.696162 | orchestrator |
2026-03-24 05:01:00.696180 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 05:01:48.425185 | orchestrator | Tuesday 24 March 2026 05:01:00 +0000 (0:00:00.774) 0:11:41.583 *********
2026-03-24 05:01:48.425305 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.425324 | orchestrator |
2026-03-24 05:01:48.425337 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 05:01:48.425349 | orchestrator | Tuesday 24 March 2026 05:01:01 +0000 (0:00:00.767) 0:11:42.351 *********
2026-03-24 05:01:48.425361 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.425373 | orchestrator |
2026-03-24 05:01:48.425384 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 05:01:48.425396 | orchestrator | Tuesday 24 March 2026 05:01:02 +0000 (0:00:00.778) 0:11:43.129 *********
2026-03-24 05:01:48.425407 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.425418 | orchestrator |
2026-03-24 05:01:48.425429 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 05:01:48.425440 | orchestrator | Tuesday 24 March 2026 05:01:02 +0000 (0:00:00.759) 0:11:43.889 *********
2026-03-24 05:01:48.425450 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.425461 | orchestrator |
2026-03-24 05:01:48.425472 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 05:01:48.425483 | orchestrator | Tuesday 24 March 2026 05:01:03 +0000 (0:00:00.816) 0:11:44.705 *********
2026-03-24 05:01:48.425494 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:48.425506 | orchestrator |
2026-03-24 05:01:48.425518 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 05:01:48.425537 | orchestrator | Tuesday 24 March 2026 05:01:05 +0000 (0:00:01.610) 0:11:46.315 *********
2026-03-24 05:01:48.425556 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:48.425575 | orchestrator |
2026-03-24 05:01:48.425600 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 05:01:48.425622 | orchestrator | Tuesday 24 March 2026 05:01:07 +0000 (0:00:02.096) 0:11:48.411 *********
2026-03-24 05:01:48.425672 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-03-24 05:01:48.425691 | orchestrator |
2026-03-24 05:01:48.425709 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 05:01:48.425725 | orchestrator | Tuesday 24 March 2026 05:01:08 +0000 (0:00:01.229) 0:11:49.641 *********
2026-03-24 05:01:48.425740 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.425758 | orchestrator |
2026-03-24 05:01:48.425775 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 05:01:48.425791 | orchestrator | Tuesday 24 March 2026 05:01:09 +0000 (0:00:01.154) 0:11:50.795 *********
2026-03-24 05:01:48.425808 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.425824 | orchestrator |
2026-03-24 05:01:48.425842 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 05:01:48.425858 | orchestrator | Tuesday 24 March 2026 05:01:11 +0000 (0:00:01.128) 0:11:51.924 *********
2026-03-24 05:01:48.425875 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 05:01:48.425893 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 05:01:48.425992 | orchestrator |
2026-03-24 05:01:48.426013 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 05:01:48.426097 | orchestrator | Tuesday 24 March 2026 05:01:12 +0000 (0:00:01.840) 0:11:53.764 *********
2026-03-24 05:01:48.426107 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:48.426118 | orchestrator |
2026-03-24 05:01:48.426129 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-24 05:01:48.426140 | orchestrator | Tuesday 24 March 2026 05:01:14 +0000 (0:00:01.525) 0:11:55.290 *********
2026-03-24 05:01:48.426155 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.426173 | orchestrator |
2026-03-24 05:01:48.426189 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-24 05:01:48.426208 | orchestrator | Tuesday 24 March 2026 05:01:15 +0000 (0:00:01.119) 0:11:56.409 *********
2026-03-24 05:01:48.426227 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.426244 | orchestrator |
2026-03-24 05:01:48.426260 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 05:01:48.426296 | orchestrator | Tuesday 24 March 2026 05:01:16 +0000 (0:00:00.772) 0:11:57.182 *********
2026-03-24 05:01:48.426314 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.426329 | orchestrator |
2026-03-24 05:01:48.426362 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 05:01:48.426392 | orchestrator | Tuesday 24 March 2026 05:01:17 +0000 (0:00:00.784) 0:11:57.967 *********
2026-03-24 05:01:48.426409 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-03-24 05:01:48.426426 | orchestrator |
2026-03-24 05:01:48.426444 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-24 05:01:48.426462 | orchestrator | Tuesday 24 March 2026 05:01:18 +0000 (0:00:01.175) 0:11:59.143 *********
2026-03-24 05:01:48.426480 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:48.426498 | orchestrator |
2026-03-24 05:01:48.426516 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-24 05:01:48.426534 | orchestrator | Tuesday 24 March 2026 05:01:19 +0000 (0:00:01.729) 0:12:00.872 *********
2026-03-24 05:01:48.426552 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-03-24 05:01:48.426571 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-03-24 05:01:48.426590 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4) 
2026-03-24 05:01:48.426610 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.426629 | orchestrator |
2026-03-24 05:01:48.426647 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-24 05:01:48.426666 | orchestrator | Tuesday 24 March 2026 05:01:21 +0000 (0:00:01.165) 0:12:02.038 *********
2026-03-24 05:01:48.426696 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.426707 | orchestrator |
2026-03-24 05:01:48.426743 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-24 05:01:48.426754 | orchestrator | Tuesday 24 March 2026 05:01:22 +0000 (0:00:01.177) 0:12:03.215 *********
2026-03-24 05:01:48.426765 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.426776 | orchestrator |
2026-03-24 05:01:48.426787 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-24 05:01:48.426798 | orchestrator | Tuesday 24 March 2026 05:01:23 +0000 (0:00:01.187) 0:12:04.403 *********
2026-03-24 05:01:48.426809 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.426819 | orchestrator |
2026-03-24 05:01:48.426830 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-24 05:01:48.426841 | orchestrator | Tuesday 24 March 2026 05:01:24 +0000 (0:00:01.153) 0:12:05.557 *********
2026-03-24 05:01:48.426851 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.426862 | orchestrator |
2026-03-24 05:01:48.426873 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-24 05:01:48.426884 | orchestrator | Tuesday 24 March 2026 05:01:25 +0000 (0:00:01.236) 0:12:06.794 *********
2026-03-24 05:01:48.426894 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.426936 | orchestrator |
2026-03-24 05:01:48.426952 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 05:01:48.426963 | orchestrator | Tuesday 24 March 2026 05:01:26 +0000 (0:00:00.790) 0:12:07.584 *********
2026-03-24 05:01:48.426974 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:48.426990 | orchestrator |
2026-03-24 05:01:48.427008 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 05:01:48.427032 | orchestrator | Tuesday 24 March 2026 05:01:28 +0000 (0:00:02.230) 0:12:09.814 *********
2026-03-24 05:01:48.427051 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:48.427069 | orchestrator |
2026-03-24 05:01:48.427087 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 05:01:48.427104 | orchestrator | Tuesday 24 March 2026 05:01:29 +0000 (0:00:00.803) 0:12:10.618 *********
2026-03-24 05:01:48.427120 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-03-24 05:01:48.427137 | orchestrator |
2026-03-24 05:01:48.427155 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-24 05:01:48.427172 | orchestrator | Tuesday 24 March 2026 05:01:30 +0000 (0:00:01.091) 0:12:11.709 *********
2026-03-24 05:01:48.427190 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.427209 | orchestrator |
2026-03-24 05:01:48.427228 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-24 05:01:48.427247 | orchestrator | Tuesday 24 March 2026 05:01:31 +0000 (0:00:01.130) 0:12:12.840 *********
2026-03-24 05:01:48.427266 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.427281 | orchestrator |
2026-03-24 05:01:48.427292 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-24 05:01:48.427302 | orchestrator | Tuesday 24 March 2026 05:01:33 +0000 (0:00:01.116) 0:12:13.956 *********
2026-03-24 05:01:48.427313 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.427324 | orchestrator |
2026-03-24 05:01:48.427335 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-24 05:01:48.427352 | orchestrator | Tuesday 24 March 2026 05:01:34 +0000 (0:00:01.134) 0:12:15.091 *********
2026-03-24 05:01:48.427371 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.427390 | orchestrator |
2026-03-24 05:01:48.427408 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-24 05:01:48.427428 | orchestrator | Tuesday 24 March 2026 05:01:35 +0000 (0:00:01.154) 0:12:16.245 *********
2026-03-24 05:01:48.427447 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.427464 | orchestrator |
2026-03-24 05:01:48.427483 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-24 05:01:48.427500 | orchestrator | Tuesday 24 March 2026 05:01:36 +0000 (0:00:01.142) 0:12:17.388 *********
2026-03-24 05:01:48.427533 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.427551 | orchestrator |
2026-03-24 05:01:48.427567 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-24 05:01:48.427583 | orchestrator | Tuesday 24 March 2026 05:01:37 +0000 (0:00:01.125) 0:12:18.513 *********
2026-03-24 05:01:48.427601 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.427618 | orchestrator |
2026-03-24 05:01:48.427648 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-24 05:01:48.427663 | orchestrator | Tuesday 24 March 2026 05:01:38 +0000 (0:00:01.123) 0:12:19.637 *********
2026-03-24 05:01:48.427673 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:01:48.427684 | orchestrator |
2026-03-24 05:01:48.427695 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-24 05:01:48.427706 | orchestrator | Tuesday 24 March 2026 05:01:39 +0000 (0:00:01.147) 0:12:20.784 *********
2026-03-24 05:01:48.427723 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:01:48.427741 | orchestrator |
2026-03-24 05:01:48.427759 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 05:01:48.427778 | orchestrator | Tuesday 24 March 2026 05:01:40 +0000 (0:00:00.806) 0:12:21.591 *********
2026-03-24 05:01:48.427798 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-03-24 05:01:48.427815 | orchestrator |
2026-03-24 05:01:48.427834 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-24 05:01:48.427854 | orchestrator | Tuesday 24 March 2026 05:01:41 +0000 (0:00:01.098) 0:12:22.690 *********
2026-03-24 05:01:48.427872 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-03-24 05:01:48.427891 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-24 05:01:48.427930 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-24 05:01:48.427941 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-24 05:01:48.427952 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-24 05:01:48.427963 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-24 05:01:48.427973 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-24 05:01:48.427999 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-24 05:02:22.373641 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-24 05:02:22.373778 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-24 05:02:22.373803 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-24 05:02:22.373819 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-24 05:02:22.373835 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-24 05:02:22.373847 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-24 05:02:22.373857 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-03-24 05:02:22.373867 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-03-24 05:02:22.373876 | orchestrator |
2026-03-24 05:02:22.373886 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-24 05:02:22.373966 | orchestrator | Tuesday 24 March 2026 05:01:48 +0000 (0:00:06.613) 0:12:29.303 *********
2026-03-24 05:02:22.373979 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.373988 | orchestrator |
2026-03-24 05:02:22.373997 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-24 05:02:22.374006 | orchestrator | Tuesday 24 March 2026 05:01:49 +0000 (0:00:00.754) 0:12:30.058 *********
2026-03-24 05:02:22.374069 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374079 | orchestrator |
2026-03-24 05:02:22.374089 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-24 05:02:22.374098 | orchestrator | Tuesday 24 March 2026 05:01:49 +0000 (0:00:00.818) 0:12:30.877 *********
2026-03-24 05:02:22.374131 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374140 | orchestrator |
2026-03-24 05:02:22.374150 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-24 05:02:22.374161 | orchestrator | Tuesday 24 March 2026 05:01:50 +0000 (0:00:00.807) 0:12:31.684 *********
2026-03-24 05:02:22.374172 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374181 | orchestrator |
2026-03-24 05:02:22.374191 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-24 05:02:22.374201 | orchestrator | Tuesday 24 March 2026 05:01:51 +0000 (0:00:00.767) 0:12:32.452 *********
2026-03-24 05:02:22.374211 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374221 | orchestrator |
2026-03-24 05:02:22.374230 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-24 05:02:22.374240 | orchestrator | Tuesday 24 March 2026 05:01:52 +0000 (0:00:00.785) 0:12:33.238 *********
2026-03-24 05:02:22.374250 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374259 | orchestrator |
2026-03-24 05:02:22.374269 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-24 05:02:22.374281 | orchestrator | Tuesday 24 March 2026 05:01:53 +0000 (0:00:00.867) 0:12:34.105 *********
2026-03-24 05:02:22.374290 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374301 | orchestrator |
2026-03-24 05:02:22.374311 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-24 05:02:22.374321 | orchestrator | Tuesday 24 March 2026 05:01:53 +0000 (0:00:00.786) 0:12:34.891 *********
2026-03-24 05:02:22.374331 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374344 | orchestrator |
2026-03-24 05:02:22.374364 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-24 05:02:22.374386 | orchestrator | Tuesday 24 March 2026 05:01:54 +0000 (0:00:00.788) 0:12:35.680 *********
2026-03-24 05:02:22.374401 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374416 | orchestrator |
2026-03-24 05:02:22.374432 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-24 05:02:22.374447 | orchestrator | Tuesday 24 March 2026 05:01:55 +0000 (0:00:00.822) 0:12:36.502 *********
2026-03-24 05:02:22.374462 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374476 | orchestrator |
2026-03-24 05:02:22.374492 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-24 05:02:22.374524 | orchestrator | Tuesday 24 March 2026 05:01:56 +0000 (0:00:00.763) 0:12:37.266 *********
2026-03-24 05:02:22.374541 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374555 | orchestrator |
2026-03-24 05:02:22.374564 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-24 05:02:22.374573 | orchestrator | Tuesday 24 March 2026 05:01:57 +0000 (0:00:00.763) 0:12:38.030 *********
2026-03-24 05:02:22.374582 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374590 | orchestrator |
2026-03-24 05:02:22.374599 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-24 05:02:22.374608 | orchestrator | Tuesday 24 March 2026 05:01:57 +0000 (0:00:00.804) 0:12:38.835 *********
2026-03-24 05:02:22.374616 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374626 | orchestrator |
2026-03-24 05:02:22.374640 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-24 05:02:22.374661 | orchestrator | Tuesday 24 March 2026 05:01:58 +0000 (0:00:00.873) 0:12:39.709 *********
2026-03-24 05:02:22.374678 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374692 | orchestrator |
2026-03-24 05:02:22.374706 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-24 05:02:22.374719 | orchestrator | Tuesday 24 March 2026 05:01:59 +0000 (0:00:00.776) 0:12:40.485 *********
2026-03-24 05:02:22.374732 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374746 | orchestrator |
2026-03-24 05:02:22.374760 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-24 05:02:22.374789 | orchestrator | Tuesday 24 March 2026 05:02:00 +0000 (0:00:00.894) 0:12:41.380 *********
2026-03-24 05:02:22.374799 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374808 | orchestrator |
2026-03-24 05:02:22.374817 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-24 05:02:22.374826 | orchestrator | Tuesday 24 March 2026 05:02:01 +0000 (0:00:00.783) 0:12:42.163 *********
2026-03-24 05:02:22.374834 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374843 | orchestrator |
2026-03-24 05:02:22.374872 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:02:22.374883 | orchestrator | Tuesday 24 March 2026 05:02:02 +0000 (0:00:00.759) 0:12:42.923 *********
2026-03-24 05:02:22.374892 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374931 | orchestrator |
2026-03-24 05:02:22.374940 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:02:22.374949 | orchestrator | Tuesday 24 March 2026 05:02:02 +0000 (0:00:00.769) 0:12:43.692 *********
2026-03-24 05:02:22.374957 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.374966 | orchestrator |
2026-03-24 05:02:22.374975 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:02:22.374983 | orchestrator | Tuesday 24 March 2026 05:02:03 +0000 (0:00:00.825) 0:12:44.517 *********
2026-03-24 05:02:22.374992 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.375001 | orchestrator |
2026-03-24 05:02:22.375009 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:02:22.375020 | orchestrator | Tuesday 24 March 2026 05:02:04 +0000 (0:00:00.784) 0:12:45.302 *********
2026-03-24 05:02:22.375038 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.375060 | orchestrator |
2026-03-24 05:02:22.375074 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:02:22.375087 | orchestrator | Tuesday 24 March 2026 05:02:05 +0000 (0:00:00.820) 0:12:46.122 *********
2026-03-24 05:02:22.375100 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-03-24 05:02:22.375112 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-03-24 05:02:22.375127 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-03-24 05:02:22.375142 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.375157 | orchestrator |
2026-03-24 05:02:22.375172 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:02:22.375183 | orchestrator | Tuesday 24 March 2026 05:02:06 +0000 (0:00:01.039) 0:12:47.162 *********
2026-03-24 05:02:22.375192 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-03-24 05:02:22.375200 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-03-24 05:02:22.375209 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-03-24 05:02:22.375218 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.375226 | orchestrator |
2026-03-24 05:02:22.375235 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:02:22.375244 | orchestrator | Tuesday 24 March 2026 05:02:07 +0000 (0:00:01.063) 0:12:48.225 *********
2026-03-24 05:02:22.375252 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3) 
2026-03-24 05:02:22.375261 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4) 
2026-03-24 05:02:22.375269 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5) 
2026-03-24 05:02:22.375278 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.375286 | orchestrator |
2026-03-24 05:02:22.375295 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:02:22.375304 | orchestrator | Tuesday 24 March 2026 05:02:08 +0000 (0:00:01.113) 0:12:49.339 *********
2026-03-24 05:02:22.375312 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.375321 | orchestrator |
2026-03-24 05:02:22.375329 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:02:22.375338 | orchestrator | Tuesday 24 March 2026 05:02:09 +0000 (0:00:00.775) 0:12:50.114 *********
2026-03-24 05:02:22.375356 | orchestrator | skipping: [testbed-node-1] => (item=0) 
2026-03-24 05:02:22.375365 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.375374 | orchestrator |
2026-03-24 05:02:22.375382 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-24 05:02:22.375391 | orchestrator | Tuesday 24 March 2026 05:02:10 +0000 (0:00:00.910) 0:12:51.025 *********
2026-03-24 05:02:22.375400 | orchestrator | changed: [testbed-node-1]
2026-03-24 05:02:22.375408 | orchestrator |
2026-03-24 05:02:22.375417 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-24 05:02:22.375432 | orchestrator | Tuesday 24 March 2026 05:02:11 +0000 (0:00:01.439) 0:12:52.465 *********
2026-03-24 05:02:22.375442 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:02:22.375451 | orchestrator |
2026-03-24 05:02:22.375460 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-24 05:02:22.375468 | orchestrator | Tuesday 24 March 2026 05:02:12 +0000 (0:00:00.791) 0:12:53.257 *********
2026-03-24 05:02:22.375477 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-03-24 05:02:22.375486 | orchestrator |
2026-03-24 05:02:22.375495 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-24 05:02:22.375503 | orchestrator | Tuesday 24 March 2026 05:02:13 +0000 (0:00:01.210) 0:12:54.468 *********
2026-03-24 05:02:22.375512 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-03-24 05:02:22.375521 | orchestrator |
2026-03-24 05:02:22.375529 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-24 05:02:22.375541 | orchestrator | Tuesday 24 March 2026 05:02:16 +0000 (0:00:03.216) 0:12:57.684 *********
2026-03-24 05:02:22.375555 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:02:22.375577 | orchestrator |
2026-03-24 05:02:22.375595 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-24 05:02:22.375609 | orchestrator | Tuesday 24 March 2026 05:02:17 +0000 (0:00:01.161) 0:12:58.846 *********
2026-03-24 05:02:22.375622 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:02:22.375636 | orchestrator |
2026-03-24 05:02:22.375648 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-24 05:02:22.375662 | orchestrator | Tuesday 24 March 2026 05:02:19 +0000 (0:00:01.176) 0:13:00.022 *********
2026-03-24 05:02:22.375676 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:02:22.375687 | orchestrator |
2026-03-24 05:02:22.375702 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-24 05:02:22.375716 | orchestrator | Tuesday 24 March 2026 05:02:20 +0000 (0:00:01.138) 0:13:01.161 *********
2026-03-24 05:02:22.375744 | orchestrator | changed: [testbed-node-1]
2026-03-24 05:03:38.077065 | orchestrator |
2026-03-24 05:03:38.077191 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-24 05:03:38.077214 | orchestrator | Tuesday 24 March 2026 05:02:22 +0000 (0:00:02.098) 0:13:03.259 *********
2026-03-24 05:03:38.077229 | orchestrator | ok: [testbed-node-1]
2026-03-24 05:03:38.077245 | orchestrator |
2026-03-24 05:03:38.077259 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-24 05:03:38.077274 | orchestrator | Tuesday 24 March 2026 05:02:23 +0000 (0:00:01.557) 0:13:04.817 *********
2026-03-24 05:03:38.077288 | orchestrator | ok: [testbed-node-1]
2026-03-24 
05:03:38.077302 | orchestrator | 2026-03-24 05:03:38.077317 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-24 05:03:38.077331 | orchestrator | Tuesday 24 March 2026 05:02:25 +0000 (0:00:01.485) 0:13:06.303 ********* 2026-03-24 05:03:38.077344 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:03:38.077358 | orchestrator | 2026-03-24 05:03:38.077373 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-24 05:03:38.077388 | orchestrator | Tuesday 24 March 2026 05:02:26 +0000 (0:00:01.490) 0:13:07.793 ********* 2026-03-24 05:03:38.077402 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:03:38.077445 | orchestrator | 2026-03-24 05:03:38.077458 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-24 05:03:38.077472 | orchestrator | Tuesday 24 March 2026 05:02:28 +0000 (0:00:01.611) 0:13:09.404 ********* 2026-03-24 05:03:38.077483 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:03:38.077497 | orchestrator | 2026-03-24 05:03:38.077512 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-24 05:03:38.077526 | orchestrator | Tuesday 24 March 2026 05:02:30 +0000 (0:00:01.528) 0:13:10.933 ********* 2026-03-24 05:03:38.077541 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:03:38.077555 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-24 05:03:38.077569 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-24 05:03:38.077582 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-24 05:03:38.077595 | orchestrator | 2026-03-24 05:03:38.077608 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-24 05:03:38.077621 | orchestrator | Tuesday 
24 March 2026 05:02:34 +0000 (0:00:04.153) 0:13:15.086 ********* 2026-03-24 05:03:38.077634 | orchestrator | changed: [testbed-node-1] 2026-03-24 05:03:38.077645 | orchestrator | 2026-03-24 05:03:38.077653 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-24 05:03:38.077661 | orchestrator | Tuesday 24 March 2026 05:02:36 +0000 (0:00:02.059) 0:13:17.146 ********* 2026-03-24 05:03:38.077669 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:03:38.077678 | orchestrator | 2026-03-24 05:03:38.077686 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-24 05:03:38.077694 | orchestrator | Tuesday 24 March 2026 05:02:37 +0000 (0:00:01.134) 0:13:18.280 ********* 2026-03-24 05:03:38.077702 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:03:38.077710 | orchestrator | 2026-03-24 05:03:38.077718 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-24 05:03:38.077726 | orchestrator | Tuesday 24 March 2026 05:02:38 +0000 (0:00:01.110) 0:13:19.391 ********* 2026-03-24 05:03:38.077734 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:03:38.077742 | orchestrator | 2026-03-24 05:03:38.077749 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-24 05:03:38.077757 | orchestrator | Tuesday 24 March 2026 05:02:40 +0000 (0:00:01.684) 0:13:21.075 ********* 2026-03-24 05:03:38.077765 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:03:38.077773 | orchestrator | 2026-03-24 05:03:38.077781 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-24 05:03:38.077789 | orchestrator | Tuesday 24 March 2026 05:02:41 +0000 (0:00:01.472) 0:13:22.548 ********* 2026-03-24 05:03:38.077796 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:03:38.077804 | orchestrator | 2026-03-24 05:03:38.077812 | orchestrator | TASK [ceph-mon 
: Include start_monitor.yml] ************************************ 2026-03-24 05:03:38.077835 | orchestrator | Tuesday 24 March 2026 05:02:42 +0000 (0:00:00.754) 0:13:23.303 ********* 2026-03-24 05:03:38.077843 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-03-24 05:03:38.077857 | orchestrator | 2026-03-24 05:03:38.077908 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-24 05:03:38.077924 | orchestrator | Tuesday 24 March 2026 05:02:43 +0000 (0:00:01.118) 0:13:24.421 ********* 2026-03-24 05:03:38.077938 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:03:38.077952 | orchestrator | 2026-03-24 05:03:38.077965 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-24 05:03:38.077979 | orchestrator | Tuesday 24 March 2026 05:02:44 +0000 (0:00:01.126) 0:13:25.548 ********* 2026-03-24 05:03:38.077991 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:03:38.078005 | orchestrator | 2026-03-24 05:03:38.078085 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-24 05:03:38.078096 | orchestrator | Tuesday 24 March 2026 05:02:45 +0000 (0:00:01.120) 0:13:26.668 ********* 2026-03-24 05:03:38.078115 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-03-24 05:03:38.078123 | orchestrator | 2026-03-24 05:03:38.078131 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-24 05:03:38.078139 | orchestrator | Tuesday 24 March 2026 05:02:46 +0000 (0:00:01.125) 0:13:27.794 ********* 2026-03-24 05:03:38.078147 | orchestrator | changed: [testbed-node-1] 2026-03-24 05:03:38.078155 | orchestrator | 2026-03-24 05:03:38.078163 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-24 05:03:38.078171 | orchestrator | Tuesday 24 
March 2026 05:02:49 +0000 (0:00:02.590) 0:13:30.384 ********* 2026-03-24 05:03:38.078179 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:03:38.078187 | orchestrator | 2026-03-24 05:03:38.078195 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-24 05:03:38.078203 | orchestrator | Tuesday 24 March 2026 05:02:51 +0000 (0:00:02.024) 0:13:32.408 ********* 2026-03-24 05:03:38.078231 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:03:38.078239 | orchestrator | 2026-03-24 05:03:38.078247 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-24 05:03:38.078255 | orchestrator | Tuesday 24 March 2026 05:02:53 +0000 (0:00:02.447) 0:13:34.855 ********* 2026-03-24 05:03:38.078263 | orchestrator | changed: [testbed-node-1] 2026-03-24 05:03:38.078271 | orchestrator | 2026-03-24 05:03:38.078279 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-24 05:03:38.078287 | orchestrator | Tuesday 24 March 2026 05:02:56 +0000 (0:00:02.952) 0:13:37.808 ********* 2026-03-24 05:03:38.078294 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-03-24 05:03:38.078303 | orchestrator | 2026-03-24 05:03:38.078311 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-24 05:03:38.078318 | orchestrator | Tuesday 24 March 2026 05:02:58 +0000 (0:00:01.121) 0:13:38.929 ********* 2026-03-24 05:03:38.078326 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-24 05:03:38.078335 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:03:38.078343 | orchestrator | 2026-03-24 05:03:38.078351 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-24 05:03:38.078359 | orchestrator | Tuesday 24 March 2026 05:03:21 +0000 (0:00:23.013) 0:14:01.942 ********* 2026-03-24 05:03:38.078379 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:03:38.078388 | orchestrator | 2026-03-24 05:03:38.078396 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-24 05:03:38.078412 | orchestrator | Tuesday 24 March 2026 05:03:23 +0000 (0:00:02.667) 0:14:04.610 ********* 2026-03-24 05:03:38.078421 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:03:38.078429 | orchestrator | 2026-03-24 05:03:38.078437 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-24 05:03:38.078444 | orchestrator | Tuesday 24 March 2026 05:03:24 +0000 (0:00:00.769) 0:14:05.380 ********* 2026-03-24 05:03:38.078454 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-24 05:03:38.078465 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-24 05:03:38.078474 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-24 05:03:38.078494 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-24 05:03:38.078504 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-24 05:03:38.078513 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}])  2026-03-24 05:03:38.078523 | orchestrator | 2026-03-24 05:03:38.078532 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-24 05:03:38.078540 | orchestrator | Tuesday 24 March 2026 05:03:34 +0000 (0:00:09.715) 0:14:15.095 ********* 2026-03-24 05:03:38.078548 | orchestrator | changed: [testbed-node-1] 2026-03-24 05:03:38.078555 | orchestrator | 
2026-03-24 05:03:38.078563 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:03:38.078571 | orchestrator | Tuesday 24 March 2026 05:03:36 +0000 (0:00:02.095) 0:14:17.190 ********* 2026-03-24 05:03:38.078585 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:04:12.443312 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-24 05:04:12.443427 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-24 05:04:12.443446 | orchestrator | 2026-03-24 05:04:12.443458 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:04:12.443471 | orchestrator | Tuesday 24 March 2026 05:03:38 +0000 (0:00:01.779) 0:14:18.970 ********* 2026-03-24 05:04:12.443482 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-24 05:04:12.443494 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-24 05:04:12.443505 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-24 05:04:12.443516 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:04:12.443527 | orchestrator | 2026-03-24 05:04:12.443538 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-24 05:04:12.443550 | orchestrator | Tuesday 24 March 2026 05:03:39 +0000 (0:00:01.030) 0:14:20.000 ********* 2026-03-24 05:04:12.443561 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:04:12.443568 | orchestrator | 2026-03-24 05:04:12.443575 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-24 05:04:12.443581 | orchestrator | Tuesday 24 March 2026 05:03:39 +0000 (0:00:00.754) 0:14:20.755 ********* 2026-03-24 05:04:12.443588 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:04:12.443595 | orchestrator | 2026-03-24 05:04:12.443602 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 05:04:12.443608 | orchestrator | Tuesday 24 March 2026 05:03:42 +0000 (0:00:02.297) 0:14:23.053 ********* 2026-03-24 05:04:12.443615 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:04:12.443621 | orchestrator | 2026-03-24 05:04:12.443627 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-24 05:04:12.443634 | orchestrator | Tuesday 24 March 2026 05:03:42 +0000 (0:00:00.761) 0:14:23.814 ********* 2026-03-24 05:04:12.443662 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:04:12.443669 | orchestrator | 2026-03-24 05:04:12.443675 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-24 05:04:12.443681 | orchestrator | Tuesday 24 March 2026 05:03:43 +0000 (0:00:00.804) 0:14:24.618 ********* 2026-03-24 05:04:12.443687 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:04:12.443694 | orchestrator | 2026-03-24 05:04:12.443700 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-24 05:04:12.443706 | orchestrator | Tuesday 24 March 2026 05:03:44 +0000 (0:00:00.747) 0:14:25.366 ********* 2026-03-24 05:04:12.443712 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:04:12.443718 | orchestrator | 2026-03-24 05:04:12.443737 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-24 05:04:12.443743 | orchestrator | Tuesday 24 March 2026 05:03:45 +0000 (0:00:00.759) 0:14:26.126 ********* 2026-03-24 05:04:12.443750 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:04:12.443756 | 
orchestrator | 2026-03-24 05:04:12.443762 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-24 05:04:12.443768 | orchestrator | Tuesday 24 March 2026 05:03:45 +0000 (0:00:00.760) 0:14:26.886 ********* 2026-03-24 05:04:12.443774 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:04:12.443781 | orchestrator | 2026-03-24 05:04:12.443787 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-24 05:04:12.443793 | orchestrator | Tuesday 24 March 2026 05:03:46 +0000 (0:00:00.757) 0:14:27.644 ********* 2026-03-24 05:04:12.443799 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:04:12.443805 | orchestrator | 2026-03-24 05:04:12.443812 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-24 05:04:12.443818 | orchestrator | 2026-03-24 05:04:12.443824 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-24 05:04:12.443830 | orchestrator | Tuesday 24 March 2026 05:03:47 +0000 (0:00:00.953) 0:14:28.597 ********* 2026-03-24 05:04:12.443836 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.443842 | orchestrator | 2026-03-24 05:04:12.443848 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-24 05:04:12.443895 | orchestrator | Tuesday 24 March 2026 05:03:48 +0000 (0:00:01.140) 0:14:29.738 ********* 2026-03-24 05:04:12.443910 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.443921 | orchestrator | 2026-03-24 05:04:12.443931 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-24 05:04:12.443942 | orchestrator | Tuesday 24 March 2026 05:03:49 +0000 (0:00:00.800) 0:14:30.538 ********* 2026-03-24 05:04:12.443951 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:12.443961 | orchestrator | 2026-03-24 05:04:12.443972 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-03-24 05:04:12.443982 | orchestrator | Tuesday 24 March 2026 05:03:50 +0000 (0:00:00.766) 0:14:31.304 ********* 2026-03-24 05:04:12.443993 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.444004 | orchestrator | 2026-03-24 05:04:12.444013 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:04:12.444024 | orchestrator | Tuesday 24 March 2026 05:03:51 +0000 (0:00:00.760) 0:14:32.065 ********* 2026-03-24 05:04:12.444037 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-24 05:04:12.444048 | orchestrator | 2026-03-24 05:04:12.444058 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:04:12.444068 | orchestrator | Tuesday 24 March 2026 05:03:52 +0000 (0:00:01.109) 0:14:33.174 ********* 2026-03-24 05:04:12.444078 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.444090 | orchestrator | 2026-03-24 05:04:12.444101 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:04:12.444112 | orchestrator | Tuesday 24 March 2026 05:03:53 +0000 (0:00:01.465) 0:14:34.640 ********* 2026-03-24 05:04:12.444123 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.444140 | orchestrator | 2026-03-24 05:04:12.444147 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:04:12.444155 | orchestrator | Tuesday 24 March 2026 05:03:54 +0000 (0:00:01.123) 0:14:35.763 ********* 2026-03-24 05:04:12.444162 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.444169 | orchestrator | 2026-03-24 05:04:12.444192 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:04:12.444200 | orchestrator | Tuesday 24 March 2026 05:03:56 +0000 (0:00:01.436) 0:14:37.200 
********* 2026-03-24 05:04:12.444208 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.444215 | orchestrator | 2026-03-24 05:04:12.444222 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:04:12.444230 | orchestrator | Tuesday 24 March 2026 05:03:57 +0000 (0:00:01.106) 0:14:38.307 ********* 2026-03-24 05:04:12.444237 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.444244 | orchestrator | 2026-03-24 05:04:12.444251 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:04:12.444258 | orchestrator | Tuesday 24 March 2026 05:03:58 +0000 (0:00:01.134) 0:14:39.441 ********* 2026-03-24 05:04:12.444264 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.444270 | orchestrator | 2026-03-24 05:04:12.444277 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:04:12.444283 | orchestrator | Tuesday 24 March 2026 05:03:59 +0000 (0:00:01.142) 0:14:40.584 ********* 2026-03-24 05:04:12.444289 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:12.444295 | orchestrator | 2026-03-24 05:04:12.444301 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:04:12.444308 | orchestrator | Tuesday 24 March 2026 05:04:00 +0000 (0:00:01.116) 0:14:41.700 ********* 2026-03-24 05:04:12.444314 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.444320 | orchestrator | 2026-03-24 05:04:12.444326 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:04:12.444332 | orchestrator | Tuesday 24 March 2026 05:04:01 +0000 (0:00:01.098) 0:14:42.799 ********* 2026-03-24 05:04:12.444338 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:04:12.444344 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-03-24 05:04:12.444351 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:04:12.444357 | orchestrator | 2026-03-24 05:04:12.444363 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:04:12.444369 | orchestrator | Tuesday 24 March 2026 05:04:03 +0000 (0:00:01.926) 0:14:44.726 ********* 2026-03-24 05:04:12.444375 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:12.444381 | orchestrator | 2026-03-24 05:04:12.444388 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:04:12.444394 | orchestrator | Tuesday 24 March 2026 05:04:05 +0000 (0:00:01.240) 0:14:45.966 ********* 2026-03-24 05:04:12.444400 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:04:12.444406 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:04:12.444412 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:04:12.444418 | orchestrator | 2026-03-24 05:04:12.444425 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:04:12.444431 | orchestrator | Tuesday 24 March 2026 05:04:08 +0000 (0:00:03.131) 0:14:49.098 ********* 2026-03-24 05:04:12.444437 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-24 05:04:12.444443 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-24 05:04:12.444449 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-24 05:04:12.444455 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:12.444462 | orchestrator | 2026-03-24 05:04:12.444468 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:04:12.444482 | orchestrator | Tuesday 24 March 2026 05:04:09 +0000 (0:00:01.389) 
0:14:50.488 ********* 2026-03-24 05:04:12.444496 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:04:12.444505 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:04:12.444511 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:04:12.444518 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:12.444524 | orchestrator | 2026-03-24 05:04:12.444530 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:04:12.444536 | orchestrator | Tuesday 24 March 2026 05:04:11 +0000 (0:00:01.653) 0:14:52.142 ********* 2026-03-24 05:04:12.444545 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:12.444559 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:31.581923 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:31.582118 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.582140 | orchestrator | 2026-03-24 05:04:31.582153 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:04:31.582167 | orchestrator | Tuesday 24 March 2026 05:04:12 +0000 (0:00:01.190) 0:14:53.332 ********* 2026-03-24 05:04:31.582180 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:04:05.859315', 'end': '2026-03-24 05:04:05.910971', 'delta': '0:00:00.051656', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:04:31.582195 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:04:06.402957', 'end': 
'2026-03-24 05:04:06.461449', 'delta': '0:00:00.058492', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:04:31.582246 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'cce21668b5d2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:04:06.978483', 'end': '2026-03-24 05:04:07.034648', 'delta': '0:00:00.056165', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cce21668b5d2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:04:31.582259 | orchestrator | 2026-03-24 05:04:31.582271 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:04:31.582282 | orchestrator | Tuesday 24 March 2026 05:04:13 +0000 (0:00:01.157) 0:14:54.490 ********* 2026-03-24 05:04:31.582293 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:31.582304 | orchestrator | 2026-03-24 05:04:31.582315 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:04:31.582326 | orchestrator | Tuesday 24 March 2026 05:04:14 +0000 (0:00:01.236) 0:14:55.727 ********* 2026-03-24 05:04:31.582337 | orchestrator | 
skipping: [testbed-node-2] 2026-03-24 05:04:31.582348 | orchestrator | 2026-03-24 05:04:31.582358 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:04:31.582369 | orchestrator | Tuesday 24 March 2026 05:04:16 +0000 (0:00:01.212) 0:14:56.940 ********* 2026-03-24 05:04:31.582380 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:31.582392 | orchestrator | 2026-03-24 05:04:31.582405 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:04:31.582418 | orchestrator | Tuesday 24 March 2026 05:04:17 +0000 (0:00:01.122) 0:14:58.062 ********* 2026-03-24 05:04:31.582431 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] 2026-03-24 05:04:31.582443 | orchestrator | 2026-03-24 05:04:31.582455 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:04:31.582468 | orchestrator | Tuesday 24 March 2026 05:04:19 +0000 (0:00:01.959) 0:15:00.021 ********* 2026-03-24 05:04:31.582480 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:04:31.582493 | orchestrator | 2026-03-24 05:04:31.582505 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:04:31.582518 | orchestrator | Tuesday 24 March 2026 05:04:20 +0000 (0:00:01.163) 0:15:01.185 ********* 2026-03-24 05:04:31.582550 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.582561 | orchestrator | 2026-03-24 05:04:31.582572 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:04:31.582583 | orchestrator | Tuesday 24 March 2026 05:04:21 +0000 (0:00:01.103) 0:15:02.289 ********* 2026-03-24 05:04:31.582594 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.582605 | orchestrator | 2026-03-24 05:04:31.582616 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 
2026-03-24 05:04:31.582627 | orchestrator | Tuesday 24 March 2026 05:04:22 +0000 (0:00:01.214) 0:15:03.503 ********* 2026-03-24 05:04:31.582637 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.582660 | orchestrator | 2026-03-24 05:04:31.582671 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:04:31.582682 | orchestrator | Tuesday 24 March 2026 05:04:23 +0000 (0:00:01.104) 0:15:04.608 ********* 2026-03-24 05:04:31.582693 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.582703 | orchestrator | 2026-03-24 05:04:31.582714 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:04:31.582733 | orchestrator | Tuesday 24 March 2026 05:04:24 +0000 (0:00:01.100) 0:15:05.708 ********* 2026-03-24 05:04:31.582744 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.582754 | orchestrator | 2026-03-24 05:04:31.582765 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:04:31.582776 | orchestrator | Tuesday 24 March 2026 05:04:25 +0000 (0:00:01.097) 0:15:06.806 ********* 2026-03-24 05:04:31.582786 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.582797 | orchestrator | 2026-03-24 05:04:31.582808 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:04:31.582818 | orchestrator | Tuesday 24 March 2026 05:04:26 +0000 (0:00:01.084) 0:15:07.890 ********* 2026-03-24 05:04:31.582829 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.582839 | orchestrator | 2026-03-24 05:04:31.582850 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:04:31.582964 | orchestrator | Tuesday 24 March 2026 05:04:28 +0000 (0:00:01.110) 0:15:09.001 ********* 2026-03-24 05:04:31.582978 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.582988 
| orchestrator | 2026-03-24 05:04:31.582999 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:04:31.583011 | orchestrator | Tuesday 24 March 2026 05:04:29 +0000 (0:00:01.112) 0:15:10.114 ********* 2026-03-24 05:04:31.583022 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:31.583032 | orchestrator | 2026-03-24 05:04:31.583043 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:04:31.583054 | orchestrator | Tuesday 24 March 2026 05:04:30 +0000 (0:00:01.107) 0:15:11.221 ********* 2026-03-24 05:04:31.583065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:04:31.583084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:04:31.583096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-03-24 05:04:31.583109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:04:31.583122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:04:31.583150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:04:32.743524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 
05:04:32.743646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4fc154b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:04:32.743666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:04:32.743676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:04:32.743685 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:04:32.743713 | orchestrator | 2026-03-24 05:04:32.743723 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:04:32.743732 | orchestrator | Tuesday 24 March 2026 05:04:31 +0000 (0:00:01.235) 0:15:12.457 ********* 2026-03-24 05:04:32.743757 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:32.743768 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:32.743776 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:32.743785 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:32.743800 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:32.743808 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:32.743851 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:04:32.743924 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4fc154b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:05:03.525464 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:05:03.525564 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:05:03.525598 | orchestrator | skipping: [testbed-node-2] 2026-03-24 
05:05:03.525609 | orchestrator | 2026-03-24 05:05:03.525618 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:05:03.525626 | orchestrator | Tuesday 24 March 2026 05:04:32 +0000 (0:00:01.176) 0:15:13.633 ********* 2026-03-24 05:05:03.525634 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:05:03.525642 | orchestrator | 2026-03-24 05:05:03.525650 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:05:03.525657 | orchestrator | Tuesday 24 March 2026 05:04:34 +0000 (0:00:01.469) 0:15:15.102 ********* 2026-03-24 05:05:03.525665 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:05:03.525672 | orchestrator | 2026-03-24 05:05:03.525679 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:05:03.525687 | orchestrator | Tuesday 24 March 2026 05:04:35 +0000 (0:00:01.100) 0:15:16.203 ********* 2026-03-24 05:05:03.525694 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:05:03.525701 | orchestrator | 2026-03-24 05:05:03.525708 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:05:03.525716 | orchestrator | Tuesday 24 March 2026 05:04:36 +0000 (0:00:01.457) 0:15:17.660 ********* 2026-03-24 05:05:03.525723 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.525730 | orchestrator | 2026-03-24 05:05:03.525737 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:05:03.525745 | orchestrator | Tuesday 24 March 2026 05:04:37 +0000 (0:00:01.130) 0:15:18.791 ********* 2026-03-24 05:05:03.525752 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.525759 | orchestrator | 2026-03-24 05:05:03.525766 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:05:03.525774 | orchestrator | Tuesday 24 March 2026 05:04:39 
+0000 (0:00:01.254) 0:15:20.046 ********* 2026-03-24 05:05:03.525781 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.525788 | orchestrator | 2026-03-24 05:05:03.525795 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:05:03.525803 | orchestrator | Tuesday 24 March 2026 05:04:40 +0000 (0:00:01.133) 0:15:21.179 ********* 2026-03-24 05:05:03.525810 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-24 05:05:03.525818 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-24 05:05:03.525826 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:05:03.525833 | orchestrator | 2026-03-24 05:05:03.525840 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:05:03.525890 | orchestrator | Tuesday 24 March 2026 05:04:41 +0000 (0:00:01.659) 0:15:22.838 ********* 2026-03-24 05:05:03.525899 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-24 05:05:03.525907 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-24 05:05:03.525914 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-24 05:05:03.525921 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.525928 | orchestrator | 2026-03-24 05:05:03.525936 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:05:03.525943 | orchestrator | Tuesday 24 March 2026 05:04:43 +0000 (0:00:01.146) 0:15:23.984 ********* 2026-03-24 05:05:03.525950 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.525958 | orchestrator | 2026-03-24 05:05:03.525965 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 05:05:03.525972 | orchestrator | Tuesday 24 March 2026 05:04:44 +0000 (0:00:01.110) 0:15:25.095 ********* 2026-03-24 05:05:03.525980 | 
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:05:03.525995 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:05:03.526003 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:05:03.526010 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:05:03.526066 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:05:03.526087 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:05:03.526110 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:05:03.526119 | orchestrator | 2026-03-24 05:05:03.526128 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 05:05:03.526137 | orchestrator | Tuesday 24 March 2026 05:04:46 +0000 (0:00:01.818) 0:15:26.913 ********* 2026-03-24 05:05:03.526145 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:05:03.526154 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:05:03.526163 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:05:03.526172 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:05:03.526181 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:05:03.526190 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:05:03.526198 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:05:03.526210 | orchestrator | 2026-03-24 05:05:03.526222 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-24 05:05:03.526236 | orchestrator | Tuesday 24 March 2026 05:04:48 +0000 (0:00:02.149) 0:15:29.063 ********* 2026-03-24 05:05:03.526249 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.526260 | orchestrator | 2026-03-24 05:05:03.526272 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-24 05:05:03.526285 | orchestrator | Tuesday 24 March 2026 05:04:49 +0000 (0:00:00.889) 0:15:29.953 ********* 2026-03-24 05:05:03.526296 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.526309 | orchestrator | 2026-03-24 05:05:03.526321 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-24 05:05:03.526334 | orchestrator | Tuesday 24 March 2026 05:04:49 +0000 (0:00:00.847) 0:15:30.800 ********* 2026-03-24 05:05:03.526347 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.526359 | orchestrator | 2026-03-24 05:05:03.526372 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-24 05:05:03.526384 | orchestrator | Tuesday 24 March 2026 05:04:50 +0000 (0:00:00.813) 0:15:31.614 ********* 2026-03-24 05:05:03.526395 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.526407 | orchestrator | 2026-03-24 05:05:03.526420 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-24 05:05:03.526432 | orchestrator | Tuesday 24 March 2026 05:04:51 +0000 (0:00:00.875) 0:15:32.489 ********* 2026-03-24 05:05:03.526445 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.526457 | orchestrator | 2026-03-24 05:05:03.526470 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-24 05:05:03.526483 | orchestrator | Tuesday 24 March 2026 05:04:52 +0000 (0:00:00.770) 0:15:33.260 ********* 
2026-03-24 05:05:03.526494 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-24 05:05:03.526501 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-24 05:05:03.526509 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-24 05:05:03.526516 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.526523 | orchestrator | 2026-03-24 05:05:03.526530 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-03-24 05:05:03.526545 | orchestrator | Tuesday 24 March 2026 05:04:53 +0000 (0:00:01.374) 0:15:34.635 ********* 2026-03-24 05:05:03.526553 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-03-24 05:05:03.526560 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-03-24 05:05:03.526567 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-03-24 05:05:03.526575 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-03-24 05:05:03.526582 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-03-24 05:05:03.526589 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-03-24 05:05:03.526596 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:05:03.526603 | orchestrator | 2026-03-24 05:05:03.526610 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-03-24 05:05:03.526618 | orchestrator | Tuesday 24 March 2026 05:04:55 +0000 (0:00:01.590) 0:15:36.226 ********* 2026-03-24 05:05:03.526625 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:05:03.526632 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:05:03.526639 | orchestrator | 2026-03-24 05:05:03.526647 | 
orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-24 05:05:03.526654 | orchestrator | Tuesday 24 March 2026 05:04:59 +0000 (0:00:03.864) 0:15:40.090 *********
2026-03-24 05:05:03.526661 | orchestrator | changed: [testbed-node-2]
2026-03-24 05:05:03.526668 | orchestrator |
2026-03-24 05:05:03.526675 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 05:05:03.526683 | orchestrator | Tuesday 24 March 2026 05:05:01 +0000 (0:00:02.133) 0:15:42.223 *********
2026-03-24 05:05:03.526690 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-03-24 05:05:03.526698 | orchestrator |
2026-03-24 05:05:03.526705 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 05:05:03.526713 | orchestrator | Tuesday 24 March 2026 05:05:02 +0000 (0:00:01.085) 0:15:43.308 *********
2026-03-24 05:05:03.526720 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-03-24 05:05:03.526727 | orchestrator |
2026-03-24 05:05:03.526741 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 05:05:03.526755 | orchestrator | Tuesday 24 March 2026 05:05:03 +0000 (0:00:01.102) 0:15:44.411 *********
2026-03-24 05:05:45.501541 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.501634 | orchestrator |
2026-03-24 05:05:45.501646 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 05:05:45.501654 | orchestrator | Tuesday 24 March 2026 05:05:05 +0000 (0:00:01.512) 0:15:45.924 *********
2026-03-24 05:05:45.501661 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.501673 | orchestrator |
2026-03-24 05:05:45.501684 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 05:05:45.501692 | orchestrator | Tuesday 24 March 2026 05:05:06 +0000 (0:00:01.104) 0:15:47.029 *********
2026-03-24 05:05:45.501699 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.501706 | orchestrator |
2026-03-24 05:05:45.501713 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 05:05:45.501719 | orchestrator | Tuesday 24 March 2026 05:05:07 +0000 (0:00:01.101) 0:15:48.131 *********
2026-03-24 05:05:45.501726 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.501733 | orchestrator |
2026-03-24 05:05:45.501740 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 05:05:45.501746 | orchestrator | Tuesday 24 March 2026 05:05:08 +0000 (0:00:01.140) 0:15:49.271 *********
2026-03-24 05:05:45.501753 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.501760 | orchestrator |
2026-03-24 05:05:45.501766 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 05:05:45.501793 | orchestrator | Tuesday 24 March 2026 05:05:09 +0000 (0:00:01.554) 0:15:50.826 *********
2026-03-24 05:05:45.501800 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.501807 | orchestrator |
2026-03-24 05:05:45.501813 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 05:05:45.501820 | orchestrator | Tuesday 24 March 2026 05:05:11 +0000 (0:00:01.113) 0:15:51.939 *********
2026-03-24 05:05:45.501826 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.501833 | orchestrator |
2026-03-24 05:05:45.501865 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 05:05:45.501872 | orchestrator | Tuesday 24 March 2026 05:05:12 +0000 (0:00:01.094) 0:15:53.034 *********
2026-03-24 05:05:45.501878 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.501887 | orchestrator |
2026-03-24 05:05:45.501899 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 05:05:45.501906 | orchestrator | Tuesday 24 March 2026 05:05:13 +0000 (0:00:01.817) 0:15:54.851 *********
2026-03-24 05:05:45.501913 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.501919 | orchestrator |
2026-03-24 05:05:45.501926 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 05:05:45.501933 | orchestrator | Tuesday 24 March 2026 05:05:15 +0000 (0:00:01.580) 0:15:56.432 *********
2026-03-24 05:05:45.501940 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.501946 | orchestrator |
2026-03-24 05:05:45.501953 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:05:45.501960 | orchestrator | Tuesday 24 March 2026 05:05:16 +0000 (0:00:00.749) 0:15:57.181 *********
2026-03-24 05:05:45.501967 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.501973 | orchestrator |
2026-03-24 05:05:45.501980 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:05:45.501987 | orchestrator | Tuesday 24 March 2026 05:05:17 +0000 (0:00:00.786) 0:15:57.968 *********
2026-03-24 05:05:45.501994 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502000 | orchestrator |
2026-03-24 05:05:45.502008 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:05:45.502014 | orchestrator | Tuesday 24 March 2026 05:05:17 +0000 (0:00:00.776) 0:15:58.745 *********
2026-03-24 05:05:45.502061 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502068 | orchestrator |
2026-03-24 05:05:45.502075 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:05:45.502082 | orchestrator | Tuesday 24 March 2026 05:05:18 +0000 (0:00:00.764) 0:15:59.509 *********
2026-03-24 05:05:45.502088 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502096 | orchestrator |
2026-03-24 05:05:45.502109 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:05:45.502118 | orchestrator | Tuesday 24 March 2026 05:05:19 +0000 (0:00:00.747) 0:16:00.257 *********
2026-03-24 05:05:45.502126 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502134 | orchestrator |
2026-03-24 05:05:45.502142 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:05:45.502149 | orchestrator | Tuesday 24 March 2026 05:05:20 +0000 (0:00:00.784) 0:16:01.041 *********
2026-03-24 05:05:45.502156 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502164 | orchestrator |
2026-03-24 05:05:45.502172 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:05:45.502179 | orchestrator | Tuesday 24 March 2026 05:05:20 +0000 (0:00:00.817) 0:16:01.859 *********
2026-03-24 05:05:45.502187 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.502194 | orchestrator |
2026-03-24 05:05:45.502202 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:05:45.502209 | orchestrator | Tuesday 24 March 2026 05:05:21 +0000 (0:00:00.804) 0:16:02.663 *********
2026-03-24 05:05:45.502217 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.502224 | orchestrator |
2026-03-24 05:05:45.502232 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:05:45.502246 | orchestrator | Tuesday 24 March 2026 05:05:22 +0000 (0:00:00.778) 0:16:03.442 *********
2026-03-24 05:05:45.502254 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.502261 | orchestrator |
2026-03-24 05:05:45.502269 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:05:45.502276 | orchestrator | Tuesday 24 March 2026 05:05:23 +0000 (0:00:00.800) 0:16:04.242 *********
2026-03-24 05:05:45.502284 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502292 | orchestrator |
2026-03-24 05:05:45.502300 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:05:45.502308 | orchestrator | Tuesday 24 March 2026 05:05:24 +0000 (0:00:00.755) 0:16:04.998 *********
2026-03-24 05:05:45.502328 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502340 | orchestrator |
2026-03-24 05:05:45.502347 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 05:05:45.502368 | orchestrator | Tuesday 24 March 2026 05:05:24 +0000 (0:00:00.772) 0:16:05.770 *********
2026-03-24 05:05:45.502375 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502382 | orchestrator |
2026-03-24 05:05:45.502388 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 05:05:45.502395 | orchestrator | Tuesday 24 March 2026 05:05:25 +0000 (0:00:00.776) 0:16:06.546 *********
2026-03-24 05:05:45.502402 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502408 | orchestrator |
2026-03-24 05:05:45.502415 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 05:05:45.502422 | orchestrator | Tuesday 24 March 2026 05:05:26 +0000 (0:00:00.748) 0:16:07.294 *********
2026-03-24 05:05:45.502428 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502435 | orchestrator |
2026-03-24 05:05:45.502442 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 05:05:45.502448 | orchestrator | Tuesday 24 March 2026 05:05:27 +0000 (0:00:00.758) 0:16:08.053 *********
2026-03-24 05:05:45.502455 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502462 | orchestrator |
2026-03-24 05:05:45.502468 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 05:05:45.502475 | orchestrator | Tuesday 24 March 2026 05:05:27 +0000 (0:00:00.764) 0:16:08.818 *********
2026-03-24 05:05:45.502482 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502488 | orchestrator |
2026-03-24 05:05:45.502495 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 05:05:45.502503 | orchestrator | Tuesday 24 March 2026 05:05:28 +0000 (0:00:00.766) 0:16:09.584 *********
2026-03-24 05:05:45.502509 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502516 | orchestrator |
2026-03-24 05:05:45.502523 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 05:05:45.502529 | orchestrator | Tuesday 24 March 2026 05:05:29 +0000 (0:00:00.749) 0:16:10.333 *********
2026-03-24 05:05:45.502536 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502543 | orchestrator |
2026-03-24 05:05:45.502552 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 05:05:45.502562 | orchestrator | Tuesday 24 March 2026 05:05:30 +0000 (0:00:00.772) 0:16:11.106 *********
2026-03-24 05:05:45.502569 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502575 | orchestrator |
2026-03-24 05:05:45.502582 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 05:05:45.502589 | orchestrator | Tuesday 24 March 2026 05:05:31 +0000 (0:00:00.810) 0:16:11.916 *********
2026-03-24 05:05:45.502595 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502602 | orchestrator |
2026-03-24 05:05:45.502609 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 05:05:45.502615 | orchestrator | Tuesday 24 March 2026 05:05:31 +0000 (0:00:00.757) 0:16:12.674 *********
2026-03-24 05:05:45.502622 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502629 | orchestrator |
2026-03-24 05:05:45.502635 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 05:05:45.502648 | orchestrator | Tuesday 24 March 2026 05:05:32 +0000 (0:00:00.767) 0:16:13.441 *********
2026-03-24 05:05:45.502654 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.502661 | orchestrator |
2026-03-24 05:05:45.502667 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 05:05:45.502674 | orchestrator | Tuesday 24 March 2026 05:05:34 +0000 (0:00:02.015) 0:16:15.168 *********
2026-03-24 05:05:45.502681 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.502687 | orchestrator |
2026-03-24 05:05:45.502694 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 05:05:45.502701 | orchestrator | Tuesday 24 March 2026 05:05:36 +0000 (0:00:02.015) 0:16:17.183 *********
2026-03-24 05:05:45.502708 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-03-24 05:05:45.502716 | orchestrator |
2026-03-24 05:05:45.502722 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 05:05:45.502729 | orchestrator | Tuesday 24 March 2026 05:05:37 +0000 (0:00:01.094) 0:16:18.278 *********
2026-03-24 05:05:45.502736 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502743 | orchestrator |
2026-03-24 05:05:45.502749 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 05:05:45.502756 | orchestrator | Tuesday 24 March 2026 05:05:38 +0000 (0:00:01.111) 0:16:19.389 *********
2026-03-24 05:05:45.502763 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502769 | orchestrator |
2026-03-24 05:05:45.502776 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 05:05:45.502783 | orchestrator | Tuesday 24 March 2026 05:05:39 +0000 (0:00:01.111) 0:16:20.500 *********
2026-03-24 05:05:45.502790 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 05:05:45.502796 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 05:05:45.502803 | orchestrator |
2026-03-24 05:05:45.502810 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 05:05:45.502817 | orchestrator | Tuesday 24 March 2026 05:05:41 +0000 (0:00:01.867) 0:16:22.368 *********
2026-03-24 05:05:45.502823 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:05:45.502830 | orchestrator |
2026-03-24 05:05:45.502876 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-24 05:05:45.502883 | orchestrator | Tuesday 24 March 2026 05:05:42 +0000 (0:00:01.476) 0:16:23.845 *********
2026-03-24 05:05:45.502890 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502897 | orchestrator |
2026-03-24 05:05:45.502903 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-24 05:05:45.502910 | orchestrator | Tuesday 24 March 2026 05:05:44 +0000 (0:00:01.088) 0:16:24.934 *********
2026-03-24 05:05:45.502917 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:05:45.502923 | orchestrator |
2026-03-24 05:05:45.502934 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 05:05:45.502941 | orchestrator | Tuesday 24 March 2026 05:05:44 +0000 (0:00:00.714) 0:16:25.648 *********
2026-03-24 05:05:45.502952 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.624157 | orchestrator |
2026-03-24 05:06:24.624295 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 05:06:24.624314 | orchestrator | Tuesday 24 March 2026 05:05:45 +0000 (0:00:00.741) 0:16:26.390 *********
2026-03-24 05:06:24.624325 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-03-24 05:06:24.624338 | orchestrator |
2026-03-24 05:06:24.624349 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-24 05:06:24.624360 | orchestrator | Tuesday 24 March 2026 05:05:46 +0000 (0:00:01.066) 0:16:27.456 *********
2026-03-24 05:06:24.624371 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:06:24.624383 | orchestrator |
2026-03-24 05:06:24.624414 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-24 05:06:24.624494 | orchestrator | Tuesday 24 March 2026 05:05:48 +0000 (0:00:01.747) 0:16:29.204 *********
2026-03-24 05:06:24.624507 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-24 05:06:24.624519 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-24 05:06:24.624530 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-24 05:06:24.624540 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.624552 | orchestrator |
2026-03-24 05:06:24.624563 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-24 05:06:24.624589 | orchestrator | Tuesday 24 March 2026 05:05:49 +0000 (0:00:01.115) 0:16:30.320 *********
2026-03-24 05:06:24.624600 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.624611 | orchestrator |
2026-03-24 05:06:24.624633 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-24 05:06:24.624644 | orchestrator | Tuesday 24 March 2026 05:05:50 +0000 (0:00:01.101) 0:16:31.421 *********
2026-03-24 05:06:24.624654 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.624665 | orchestrator |
2026-03-24 05:06:24.624676 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-24 05:06:24.624687 | orchestrator | Tuesday 24 March 2026 05:05:51 +0000 (0:00:01.062) 0:16:32.485 *********
2026-03-24 05:06:24.624699 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.624712 | orchestrator |
2026-03-24 05:06:24.624725 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-24 05:06:24.624738 | orchestrator | Tuesday 24 March 2026 05:05:52 +0000 (0:00:00.920) 0:16:33.405 *********
2026-03-24 05:06:24.624750 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.624765 | orchestrator |
2026-03-24 05:06:24.624784 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-24 05:06:24.624803 | orchestrator | Tuesday 24 March 2026 05:05:53 +0000 (0:00:01.040) 0:16:34.445 *********
2026-03-24 05:06:24.624825 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.624906 | orchestrator |
2026-03-24 05:06:24.624924 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 05:06:24.624942 | orchestrator | Tuesday 24 March 2026 05:05:54 +0000 (0:00:00.729) 0:16:35.175 *********
2026-03-24 05:06:24.624958 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:06:24.624975 | orchestrator |
2026-03-24 05:06:24.624993 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 05:06:24.625011 | orchestrator | Tuesday 24 March 2026 05:05:56 +0000 (0:00:02.292) 0:16:37.467 *********
2026-03-24 05:06:24.625029 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:06:24.625048 | orchestrator |
2026-03-24 05:06:24.625066 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 05:06:24.625086 | orchestrator | Tuesday 24 March 2026 05:05:57 +0000 (0:00:00.746) 0:16:38.214 *********
2026-03-24 05:06:24.625099 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-03-24 05:06:24.625109 | orchestrator |
2026-03-24 05:06:24.625120 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-24 05:06:24.625130 | orchestrator | Tuesday 24 March 2026 05:05:58 +0000 (0:00:01.108) 0:16:39.322 *********
2026-03-24 05:06:24.625141 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625152 | orchestrator |
2026-03-24 05:06:24.625163 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-24 05:06:24.625173 | orchestrator | Tuesday 24 March 2026 05:05:59 +0000 (0:00:01.146) 0:16:40.469 *********
2026-03-24 05:06:24.625184 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625195 | orchestrator |
2026-03-24 05:06:24.625205 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-24 05:06:24.625216 | orchestrator | Tuesday 24 March 2026 05:06:00 +0000 (0:00:01.141) 0:16:41.610 *********
2026-03-24 05:06:24.625227 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625252 | orchestrator |
2026-03-24 05:06:24.625263 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-24 05:06:24.625273 | orchestrator | Tuesday 24 March 2026 05:06:01 +0000 (0:00:01.158) 0:16:42.769 *********
2026-03-24 05:06:24.625284 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625294 | orchestrator |
2026-03-24 05:06:24.625305 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-24 05:06:24.625316 | orchestrator | Tuesday 24 March 2026 05:06:02 +0000 (0:00:01.113) 0:16:43.883 *********
2026-03-24 05:06:24.625326 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625337 | orchestrator |
2026-03-24 05:06:24.625348 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-24 05:06:24.625358 | orchestrator | Tuesday 24 March 2026 05:06:04 +0000 (0:00:01.181) 0:16:45.064 *********
2026-03-24 05:06:24.625369 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625379 | orchestrator |
2026-03-24 05:06:24.625390 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-24 05:06:24.625400 | orchestrator | Tuesday 24 March 2026 05:06:05 +0000 (0:00:01.170) 0:16:46.234 *********
2026-03-24 05:06:24.625425 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625437 | orchestrator |
2026-03-24 05:06:24.625448 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-24 05:06:24.625478 | orchestrator | Tuesday 24 March 2026 05:06:06 +0000 (0:00:01.111) 0:16:47.346 *********
2026-03-24 05:06:24.625489 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625500 | orchestrator |
2026-03-24 05:06:24.625511 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-24 05:06:24.625521 | orchestrator | Tuesday 24 March 2026 05:06:07 +0000 (0:00:01.124) 0:16:48.471 *********
2026-03-24 05:06:24.625532 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:06:24.625543 | orchestrator |
2026-03-24 05:06:24.625554 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 05:06:24.625565 | orchestrator | Tuesday 24 March 2026 05:06:08 +0000 (0:00:00.781) 0:16:49.252 *********
2026-03-24 05:06:24.625575 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-03-24 05:06:24.625586 | orchestrator |
2026-03-24 05:06:24.625597 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-24 05:06:24.625608 | orchestrator | Tuesday 24 March 2026 05:06:09 +0000 (0:00:01.104) 0:16:50.357 *********
2026-03-24 05:06:24.625618 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-03-24 05:06:24.625630 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-24 05:06:24.625640 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-24 05:06:24.625651 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-24 05:06:24.625661 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-24 05:06:24.625672 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-24 05:06:24.625683 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-24 05:06:24.625693 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-24 05:06:24.625704 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-24 05:06:24.625715 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-24 05:06:24.625725 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-24 05:06:24.625736 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-24 05:06:24.625747 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-24 05:06:24.625758 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-24 05:06:24.625768 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-03-24 05:06:24.625779 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-03-24 05:06:24.625790 | orchestrator |
2026-03-24 05:06:24.625800 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-24 05:06:24.625818 | orchestrator | Tuesday 24 March 2026 05:06:16 +0000 (0:00:06.567) 0:16:56.924 *********
2026-03-24 05:06:24.625851 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625863 | orchestrator |
2026-03-24 05:06:24.625874 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-24 05:06:24.625884 | orchestrator | Tuesday 24 March 2026 05:06:16 +0000 (0:00:00.778) 0:16:57.702 *********
2026-03-24 05:06:24.625895 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625906 | orchestrator |
2026-03-24 05:06:24.625917 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-24 05:06:24.625928 | orchestrator | Tuesday 24 March 2026 05:06:17 +0000 (0:00:00.769) 0:16:58.472 *********
2026-03-24 05:06:24.625938 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625949 | orchestrator |
2026-03-24 05:06:24.625960 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-24 05:06:24.625971 | orchestrator | Tuesday 24 March 2026 05:06:18 +0000 (0:00:00.830) 0:16:59.302 *********
2026-03-24 05:06:24.625981 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.625992 | orchestrator |
2026-03-24 05:06:24.626003 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-24 05:06:24.626072 | orchestrator | Tuesday 24 March 2026 05:06:19 +0000 (0:00:00.772) 0:17:00.075 *********
2026-03-24 05:06:24.626085 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.626096 | orchestrator |
2026-03-24 05:06:24.626107 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-24 05:06:24.626117 | orchestrator | Tuesday 24 March 2026 05:06:19 +0000 (0:00:00.778) 0:17:00.853 *********
2026-03-24 05:06:24.626128 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.626139 | orchestrator |
2026-03-24 05:06:24.626150 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-24 05:06:24.626161 | orchestrator | Tuesday 24 March 2026 05:06:20 +0000 (0:00:00.815) 0:17:01.669 *********
2026-03-24 05:06:24.626171 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.626182 | orchestrator |
2026-03-24 05:06:24.626193 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-24 05:06:24.626204 | orchestrator | Tuesday 24 March 2026 05:06:21 +0000 (0:00:00.792) 0:17:02.462 *********
2026-03-24 05:06:24.626214 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.626225 | orchestrator |
2026-03-24 05:06:24.626236 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-24 05:06:24.626247 | orchestrator | Tuesday 24 March 2026 05:06:22 +0000 (0:00:00.755) 0:17:03.217 *********
2026-03-24 05:06:24.626257 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.626268 | orchestrator |
2026-03-24 05:06:24.626279 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-24 05:06:24.626289 | orchestrator | Tuesday 24 March 2026 05:06:23 +0000 (0:00:00.763) 0:17:03.980 *********
2026-03-24 05:06:24.626300 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.626311 | orchestrator |
2026-03-24 05:06:24.626322 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-24 05:06:24.626338 | orchestrator | Tuesday 24 March 2026 05:06:23 +0000 (0:00:00.751) 0:17:04.731 *********
2026-03-24 05:06:24.626350 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:06:24.626360 | orchestrator |
2026-03-24 05:06:24.626379 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-24 05:07:10.762075 | orchestrator | Tuesday 24 March 2026 05:06:24 +0000 (0:00:00.777) 0:17:05.509 *********
2026-03-24 05:07:10.762197 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762215 | orchestrator |
2026-03-24 05:07:10.762228 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-24 05:07:10.762240 | orchestrator | Tuesday 24 March 2026 05:06:25 +0000 (0:00:00.766) 0:17:06.276 *********
2026-03-24 05:07:10.762279 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762291 | orchestrator |
2026-03-24 05:07:10.762303 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-24 05:07:10.762314 | orchestrator | Tuesday 24 March 2026 05:06:26 +0000 (0:00:00.841) 0:17:07.118 *********
2026-03-24 05:07:10.762325 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762336 | orchestrator |
2026-03-24 05:07:10.762346 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-24 05:07:10.762357 | orchestrator | Tuesday 24 March 2026 05:06:26 +0000 (0:00:00.771) 0:17:07.889 *********
2026-03-24 05:07:10.762368 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762379 | orchestrator |
2026-03-24 05:07:10.762389 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-24 05:07:10.762401 | orchestrator | Tuesday 24 March 2026 05:06:27 +0000 (0:00:00.851) 0:17:08.741 *********
2026-03-24 05:07:10.762412 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762423 | orchestrator |
2026-03-24 05:07:10.762433 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-24 05:07:10.762444 | orchestrator | Tuesday 24 March 2026 05:06:28 +0000 (0:00:00.769) 0:17:09.511 *********
2026-03-24 05:07:10.762455 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762466 | orchestrator |
2026-03-24 05:07:10.762477 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:07:10.762490 | orchestrator | Tuesday 24 March 2026 05:06:29 +0000 (0:00:00.760) 0:17:10.272 *********
2026-03-24 05:07:10.762500 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762511 | orchestrator |
2026-03-24 05:07:10.762522 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:07:10.762533 | orchestrator | Tuesday 24 March 2026 05:06:30 +0000 (0:00:00.782) 0:17:11.055 *********
2026-03-24 05:07:10.762543 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762554 | orchestrator |
2026-03-24 05:07:10.762565 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:07:10.762576 | orchestrator | Tuesday 24 March 2026 05:06:30 +0000 (0:00:00.814) 0:17:11.870 *********
2026-03-24 05:07:10.762586 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762597 | orchestrator |
2026-03-24 05:07:10.762608 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:07:10.762619 | orchestrator | Tuesday 24 March 2026 05:06:31 +0000 (0:00:00.800) 0:17:12.671 *********
2026-03-24 05:07:10.762629 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762640 | orchestrator |
2026-03-24 05:07:10.762651 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:07:10.762662 | orchestrator | Tuesday 24 March 2026 05:06:32 +0000 (0:00:00.751) 0:17:13.423 *********
2026-03-24 05:07:10.762673 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-24 05:07:10.762697 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-24 05:07:10.762708 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-24 05:07:10.762719 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762730 | orchestrator |
2026-03-24 05:07:10.762741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:07:10.762752 | orchestrator | Tuesday 24 March 2026 05:06:33 +0000 (0:00:01.057) 0:17:14.480 *********
2026-03-24 05:07:10.762762 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-24 05:07:10.762773 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-24 05:07:10.762784 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-24 05:07:10.762795 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762805 | orchestrator |
2026-03-24 05:07:10.762853 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:07:10.762873 | orchestrator | Tuesday 24 March 2026 05:06:34 +0000 (0:00:01.049) 0:17:15.530 *********
2026-03-24 05:07:10.762909 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-24 05:07:10.762928 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-24 05:07:10.762948 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-24 05:07:10.762967 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.762985 | orchestrator |
2026-03-24 05:07:10.763004 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:07:10.763016 | orchestrator | Tuesday 24 March 2026 05:06:35 +0000 (0:00:01.070) 0:17:16.601 *********
2026-03-24 05:07:10.763027 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.763037 | orchestrator |
2026-03-24 05:07:10.763048 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:07:10.763059 | orchestrator | Tuesday 24 March 2026 05:06:36 +0000 (0:00:00.758) 0:17:17.360 *********
2026-03-24 05:07:10.763070 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-24 05:07:10.763081 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.763092 | orchestrator |
2026-03-24 05:07:10.763103 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-24 05:07:10.763114 | orchestrator | Tuesday 24 March 2026 05:06:37 +0000 (0:00:00.872) 0:17:18.232 *********
2026-03-24 05:07:10.763124 | orchestrator | changed: [testbed-node-2]
2026-03-24 05:07:10.763135 | orchestrator |
2026-03-24 05:07:10.763146 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-24 05:07:10.763170 | orchestrator | Tuesday 24 March 2026 05:06:38 +0000 (0:00:01.404) 0:17:19.636 *********
2026-03-24 05:07:10.763181 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:07:10.763192 | orchestrator |
2026-03-24 05:07:10.763203 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-24 05:07:10.763234 | orchestrator | Tuesday 24 March 2026 05:06:39 +0000 (0:00:00.797) 0:17:20.434 *********
2026-03-24 05:07:10.763246 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-03-24 05:07:10.763258 | orchestrator |
2026-03-24 05:07:10.763269 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-24 05:07:10.763280 | orchestrator | Tuesday 24 March 2026 05:06:40 +0000 (0:00:01.150) 0:17:21.585 *********
2026-03-24 05:07:10.763290 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:07:10.763301 | orchestrator |
2026-03-24 05:07:10.763312 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-24 05:07:10.763322 | orchestrator | Tuesday 24 March 2026 05:06:43 +0000 (0:00:03.240) 0:17:24.825 *********
2026-03-24 05:07:10.763333 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:07:10.763344 | orchestrator |
2026-03-24 05:07:10.763354 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-24 05:07:10.763365 | orchestrator | Tuesday 24 March 2026 05:06:45 +0000 (0:00:01.180) 0:17:26.005 *********
2026-03-24 05:07:10.763376 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:07:10.763386 | orchestrator |
2026-03-24 05:07:10.763397 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-24 05:07:10.763408 | orchestrator | Tuesday 24 March 2026 05:06:46 +0000 (0:00:01.138) 0:17:27.144 *********
2026-03-24 05:07:10.763418 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:07:10.763429 | orchestrator |
2026-03-24 05:07:10.763440 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-24 05:07:10.763451 | orchestrator | Tuesday 24 March 2026 05:06:47 +0000 (0:00:01.177) 0:17:28.321 *********
2026-03-24 05:07:10.763461 | orchestrator | changed: [testbed-node-2]
2026-03-24 05:07:10.763472 | orchestrator |
2026-03-24 05:07:10.763483 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-24 05:07:10.763494 | orchestrator | Tuesday 24 March 2026 05:06:49 +0000 (0:00:02.107) 0:17:30.429 *********
2026-03-24 05:07:10.763505 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:07:10.763515 | orchestrator |
2026-03-24 05:07:10.763526 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-24 05:07:10.763537 | orchestrator | Tuesday 24 March 2026 05:06:51 +0000 (0:00:01.473) 0:17:32.044 *********
2026-03-24 05:07:10.763556 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:07:10.763567 | orchestrator |
2026-03-24 05:07:10.763578 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-03-24 05:07:10.763588 | orchestrator | Tuesday 24 March 2026 05:06:52 +0000 (0:00:01.473) 0:17:33.517 *********
2026-03-24 05:07:10.763599 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:07:10.763610 | orchestrator |
2026-03-24 05:07:10.763621 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-03-24 05:07:10.763631 | orchestrator | Tuesday 24 March 2026 05:06:54 +0000 (0:00:01.494) 0:17:35.012 *********
2026-03-24 05:07:10.763642 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-24 05:07:10.763653 | orchestrator |
2026-03-24 05:07:10.763663 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-03-24 05:07:10.763674 | orchestrator | Tuesday 24 March 2026 05:06:55 +0000 (0:00:01.568) 0:17:36.580 *********
2026-03-24 05:07:10.763685 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-24 05:07:10.763696 | orchestrator |
2026-03-24 05:07:10.763707 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-03-24 05:07:10.763717 | orchestrator | Tuesday 24 March 2026 05:06:57 +0000 (0:00:01.537) 0:17:38.117 *********
2026-03-24 05:07:10.763728 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 05:07:10.763739 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-24 05:07:10.763749 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-24 05:07:10.763760 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-03-24 05:07:10.763771 | orchestrator |
2026-03-24 05:07:10.763782 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-03-24 05:07:10.763793 | orchestrator | Tuesday 24 March 2026 05:07:01 +0000 (0:00:04.107) 0:17:42.224
********* 2026-03-24 05:07:10.763804 | orchestrator | changed: [testbed-node-2] 2026-03-24 05:07:10.763814 | orchestrator | 2026-03-24 05:07:10.763859 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-24 05:07:10.763878 | orchestrator | Tuesday 24 March 2026 05:07:03 +0000 (0:00:02.036) 0:17:44.261 ********* 2026-03-24 05:07:10.763897 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:07:10.763916 | orchestrator | 2026-03-24 05:07:10.763934 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-24 05:07:10.763953 | orchestrator | Tuesday 24 March 2026 05:07:04 +0000 (0:00:01.123) 0:17:45.384 ********* 2026-03-24 05:07:10.763973 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:07:10.763992 | orchestrator | 2026-03-24 05:07:10.764011 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-24 05:07:10.764030 | orchestrator | Tuesday 24 March 2026 05:07:05 +0000 (0:00:01.179) 0:17:46.564 ********* 2026-03-24 05:07:10.764042 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:07:10.764053 | orchestrator | 2026-03-24 05:07:10.764064 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-24 05:07:10.764075 | orchestrator | Tuesday 24 March 2026 05:07:07 +0000 (0:00:01.714) 0:17:48.278 ********* 2026-03-24 05:07:10.764085 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:07:10.764096 | orchestrator | 2026-03-24 05:07:10.764107 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-24 05:07:10.764118 | orchestrator | Tuesday 24 March 2026 05:07:08 +0000 (0:00:01.457) 0:17:49.736 ********* 2026-03-24 05:07:10.764128 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:07:10.764139 | orchestrator | 2026-03-24 05:07:10.764150 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-03-24 05:07:10.764168 | orchestrator | Tuesday 24 March 2026 05:07:09 +0000 (0:00:00.761) 0:17:50.498 ********* 2026-03-24 05:07:10.764179 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-03-24 05:07:10.764190 | orchestrator | 2026-03-24 05:07:10.764213 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-24 05:08:17.682007 | orchestrator | Tuesday 24 March 2026 05:07:10 +0000 (0:00:01.153) 0:17:51.652 ********* 2026-03-24 05:08:17.682177 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.682195 | orchestrator | 2026-03-24 05:08:17.682208 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-24 05:08:17.682220 | orchestrator | Tuesday 24 March 2026 05:07:11 +0000 (0:00:01.106) 0:17:52.758 ********* 2026-03-24 05:08:17.682231 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.682242 | orchestrator | 2026-03-24 05:08:17.682253 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-24 05:08:17.682264 | orchestrator | Tuesday 24 March 2026 05:07:12 +0000 (0:00:01.098) 0:17:53.857 ********* 2026-03-24 05:08:17.682276 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-03-24 05:08:17.682287 | orchestrator | 2026-03-24 05:08:17.682299 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-24 05:08:17.682336 | orchestrator | Tuesday 24 March 2026 05:07:14 +0000 (0:00:01.175) 0:17:55.032 ********* 2026-03-24 05:08:17.682369 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:08:17.682388 | orchestrator | 2026-03-24 05:08:17.682406 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-24 05:08:17.682423 | orchestrator | Tuesday 24 March 2026 05:07:16 +0000 
(0:00:02.557) 0:17:57.590 ********* 2026-03-24 05:08:17.682441 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:08:17.682460 | orchestrator | 2026-03-24 05:08:17.682479 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-24 05:08:17.682498 | orchestrator | Tuesday 24 March 2026 05:07:18 +0000 (0:00:01.945) 0:17:59.536 ********* 2026-03-24 05:08:17.682516 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:08:17.682534 | orchestrator | 2026-03-24 05:08:17.682553 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-24 05:08:17.682570 | orchestrator | Tuesday 24 March 2026 05:07:21 +0000 (0:00:02.499) 0:18:02.036 ********* 2026-03-24 05:08:17.682582 | orchestrator | changed: [testbed-node-2] 2026-03-24 05:08:17.682596 | orchestrator | 2026-03-24 05:08:17.682611 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-24 05:08:17.682624 | orchestrator | Tuesday 24 March 2026 05:07:24 +0000 (0:00:02.892) 0:18:04.928 ********* 2026-03-24 05:08:17.682638 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-03-24 05:08:17.682651 | orchestrator | 2026-03-24 05:08:17.682664 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-24 05:08:17.682676 | orchestrator | Tuesday 24 March 2026 05:07:25 +0000 (0:00:01.107) 0:18:06.035 ********* 2026-03-24 05:08:17.682689 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-24 05:08:17.682702 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:08:17.682721 | orchestrator | 2026-03-24 05:08:17.682744 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-24 05:08:17.682771 | orchestrator | Tuesday 24 March 2026 05:07:48 +0000 (0:00:22.994) 0:18:29.029 ********* 2026-03-24 05:08:17.682790 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:08:17.682861 | orchestrator | 2026-03-24 05:08:17.682881 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-24 05:08:17.682900 | orchestrator | Tuesday 24 March 2026 05:07:50 +0000 (0:00:02.702) 0:18:31.732 ********* 2026-03-24 05:08:17.682919 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.682938 | orchestrator | 2026-03-24 05:08:17.682957 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-24 05:08:17.682975 | orchestrator | Tuesday 24 March 2026 05:07:51 +0000 (0:00:00.753) 0:18:32.485 ********* 2026-03-24 05:08:17.682998 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-24 05:08:17.683055 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-24 05:08:17.683076 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-24 05:08:17.683113 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-24 05:08:17.683156 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-24 05:08:17.683170 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d623083d808c9f23428118548c4c166bdc31e202'}])  2026-03-24 05:08:17.683183 | orchestrator | 2026-03-24 05:08:17.683194 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-24 05:08:17.683206 | orchestrator | Tuesday 24 March 2026 05:08:01 +0000 (0:00:09.902) 0:18:42.388 ********* 2026-03-24 05:08:17.683216 | orchestrator | changed: [testbed-node-2] 2026-03-24 05:08:17.683227 | orchestrator | 
2026-03-24 05:08:17.683238 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:08:17.683249 | orchestrator | Tuesday 24 March 2026 05:08:03 +0000 (0:00:02.183) 0:18:44.571 ********* 2026-03-24 05:08:17.683260 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:08:17.683271 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-24 05:08:17.683281 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-24 05:08:17.683292 | orchestrator | 2026-03-24 05:08:17.683303 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:08:17.683313 | orchestrator | Tuesday 24 March 2026 05:08:05 +0000 (0:00:01.814) 0:18:46.386 ********* 2026-03-24 05:08:17.683324 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-24 05:08:17.683335 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-24 05:08:17.683346 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-24 05:08:17.683357 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.683367 | orchestrator | 2026-03-24 05:08:17.683378 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-24 05:08:17.683389 | orchestrator | Tuesday 24 March 2026 05:08:06 +0000 (0:00:01.371) 0:18:47.757 ********* 2026-03-24 05:08:17.683400 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.683420 | orchestrator | 2026-03-24 05:08:17.683431 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-24 05:08:17.683442 | orchestrator | Tuesday 24 March 2026 05:08:07 +0000 (0:00:00.765) 0:18:48.523 ********* 2026-03-24 05:08:17.683453 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:08:17.683464 | orchestrator | 2026-03-24 05:08:17.683475 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 05:08:17.683487 | orchestrator | Tuesday 24 March 2026 05:08:09 +0000 (0:00:01.973) 0:18:50.496 ********* 2026-03-24 05:08:17.683498 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.683509 | orchestrator | 2026-03-24 05:08:17.683520 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-24 05:08:17.683530 | orchestrator | Tuesday 24 March 2026 05:08:10 +0000 (0:00:00.812) 0:18:51.309 ********* 2026-03-24 05:08:17.683541 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.683552 | orchestrator | 2026-03-24 05:08:17.683563 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-24 05:08:17.683574 | orchestrator | Tuesday 24 March 2026 05:08:11 +0000 (0:00:00.797) 0:18:52.108 ********* 2026-03-24 05:08:17.683585 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.683595 | orchestrator | 2026-03-24 05:08:17.683606 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-24 05:08:17.683617 | orchestrator | Tuesday 24 March 2026 05:08:12 +0000 (0:00:00.900) 0:18:53.009 ********* 2026-03-24 05:08:17.683628 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.683639 | orchestrator | 2026-03-24 05:08:17.683650 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-24 05:08:17.683661 | orchestrator | Tuesday 24 March 2026 05:08:12 +0000 (0:00:00.739) 0:18:53.749 ********* 2026-03-24 05:08:17.683672 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.683682 | 
orchestrator | 2026-03-24 05:08:17.683693 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-24 05:08:17.683704 | orchestrator | Tuesday 24 March 2026 05:08:13 +0000 (0:00:00.767) 0:18:54.516 ********* 2026-03-24 05:08:17.683715 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.683726 | orchestrator | 2026-03-24 05:08:17.683737 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-24 05:08:17.683748 | orchestrator | Tuesday 24 March 2026 05:08:14 +0000 (0:00:00.768) 0:18:55.285 ********* 2026-03-24 05:08:17.683759 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:08:17.683770 | orchestrator | 2026-03-24 05:08:17.683780 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-03-24 05:08:17.683791 | orchestrator | 2026-03-24 05:08:17.683842 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-03-24 05:08:17.683864 | orchestrator | Tuesday 24 March 2026 05:08:16 +0000 (0:00:01.727) 0:18:57.013 ********* 2026-03-24 05:08:17.683883 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:08:17.683935 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:08:17.683948 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:08:17.683959 | orchestrator | 2026-03-24 05:08:17.683977 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-24 05:08:17.683989 | orchestrator | 2026-03-24 05:08:17.684000 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-24 05:08:17.684019 | orchestrator | Tuesday 24 March 2026 05:08:17 +0000 (0:00:01.547) 0:18:58.561 ********* 2026-03-24 05:09:02.704271 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704382 | orchestrator | 2026-03-24 05:09:02.704398 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2026-03-24 05:09:02.704410 | orchestrator | Tuesday 24 March 2026 05:08:18 +0000 (0:00:01.191) 0:18:59.752 ********* 2026-03-24 05:09:02.704421 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704431 | orchestrator | 2026-03-24 05:09:02.704442 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 05:09:02.704452 | orchestrator | Tuesday 24 March 2026 05:08:19 +0000 (0:00:01.146) 0:19:00.899 ********* 2026-03-24 05:09:02.704486 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704497 | orchestrator | 2026-03-24 05:09:02.704507 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-24 05:09:02.704517 | orchestrator | Tuesday 24 March 2026 05:08:21 +0000 (0:00:01.142) 0:19:02.042 ********* 2026-03-24 05:09:02.704527 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704536 | orchestrator | 2026-03-24 05:09:02.704546 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-24 05:09:02.704556 | orchestrator | Tuesday 24 March 2026 05:08:22 +0000 (0:00:01.125) 0:19:03.168 ********* 2026-03-24 05:09:02.704566 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704576 | orchestrator | 2026-03-24 05:09:02.704586 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-24 05:09:02.704595 | orchestrator | Tuesday 24 March 2026 05:08:23 +0000 (0:00:01.125) 0:19:04.293 ********* 2026-03-24 05:09:02.704605 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704615 | orchestrator | 2026-03-24 05:09:02.704625 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-24 05:09:02.704634 | orchestrator | Tuesday 24 March 2026 05:08:24 +0000 (0:00:01.106) 0:19:05.400 ********* 2026-03-24 05:09:02.704644 | orchestrator | skipping: 
[testbed-node-0] 2026-03-24 05:09:02.704653 | orchestrator | 2026-03-24 05:09:02.704663 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-24 05:09:02.704673 | orchestrator | Tuesday 24 March 2026 05:08:25 +0000 (0:00:01.183) 0:19:06.583 ********* 2026-03-24 05:09:02.704683 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704692 | orchestrator | 2026-03-24 05:09:02.704702 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-24 05:09:02.704712 | orchestrator | Tuesday 24 March 2026 05:08:26 +0000 (0:00:01.092) 0:19:07.675 ********* 2026-03-24 05:09:02.704721 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704731 | orchestrator | 2026-03-24 05:09:02.704740 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-24 05:09:02.704750 | orchestrator | Tuesday 24 March 2026 05:08:27 +0000 (0:00:01.117) 0:19:08.792 ********* 2026-03-24 05:09:02.704760 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704769 | orchestrator | 2026-03-24 05:09:02.704779 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-24 05:09:02.704791 | orchestrator | Tuesday 24 March 2026 05:08:28 +0000 (0:00:01.095) 0:19:09.888 ********* 2026-03-24 05:09:02.704802 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704859 | orchestrator | 2026-03-24 05:09:02.704870 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-24 05:09:02.704882 | orchestrator | Tuesday 24 March 2026 05:08:30 +0000 (0:00:01.101) 0:19:10.989 ********* 2026-03-24 05:09:02.704894 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704907 | orchestrator | 2026-03-24 05:09:02.704918 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-24 05:09:02.704930 | 
orchestrator | Tuesday 24 March 2026 05:08:31 +0000 (0:00:01.158) 0:19:12.148 ********* 2026-03-24 05:09:02.704942 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704953 | orchestrator | 2026-03-24 05:09:02.704965 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-24 05:09:02.704976 | orchestrator | Tuesday 24 March 2026 05:08:32 +0000 (0:00:01.120) 0:19:13.269 ********* 2026-03-24 05:09:02.704987 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.704999 | orchestrator | 2026-03-24 05:09:02.705010 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-24 05:09:02.705022 | orchestrator | Tuesday 24 March 2026 05:08:33 +0000 (0:00:01.116) 0:19:14.385 ********* 2026-03-24 05:09:02.705033 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705044 | orchestrator | 2026-03-24 05:09:02.705056 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-24 05:09:02.705068 | orchestrator | Tuesday 24 March 2026 05:08:34 +0000 (0:00:01.109) 0:19:15.495 ********* 2026-03-24 05:09:02.705086 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705098 | orchestrator | 2026-03-24 05:09:02.705109 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-24 05:09:02.705121 | orchestrator | Tuesday 24 March 2026 05:08:35 +0000 (0:00:01.120) 0:19:16.616 ********* 2026-03-24 05:09:02.705133 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705144 | orchestrator | 2026-03-24 05:09:02.705153 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-24 05:09:02.705163 | orchestrator | Tuesday 24 March 2026 05:08:36 +0000 (0:00:01.113) 0:19:17.729 ********* 2026-03-24 05:09:02.705173 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705182 | orchestrator | 2026-03-24 
05:09:02.705207 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-24 05:09:02.705217 | orchestrator | Tuesday 24 March 2026 05:08:37 +0000 (0:00:01.137) 0:19:18.867 ********* 2026-03-24 05:09:02.705236 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705246 | orchestrator | 2026-03-24 05:09:02.705256 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-24 05:09:02.705267 | orchestrator | Tuesday 24 March 2026 05:08:39 +0000 (0:00:01.107) 0:19:19.975 ********* 2026-03-24 05:09:02.705276 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705287 | orchestrator | 2026-03-24 05:09:02.705310 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-24 05:09:02.705321 | orchestrator | Tuesday 24 March 2026 05:08:40 +0000 (0:00:01.107) 0:19:21.083 ********* 2026-03-24 05:09:02.705330 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705340 | orchestrator | 2026-03-24 05:09:02.705368 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-24 05:09:02.705379 | orchestrator | Tuesday 24 March 2026 05:08:41 +0000 (0:00:01.161) 0:19:22.245 ********* 2026-03-24 05:09:02.705388 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705398 | orchestrator | 2026-03-24 05:09:02.705408 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-24 05:09:02.705417 | orchestrator | Tuesday 24 March 2026 05:08:42 +0000 (0:00:01.112) 0:19:23.357 ********* 2026-03-24 05:09:02.705427 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705437 | orchestrator | 2026-03-24 05:09:02.705446 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-24 05:09:02.705456 | orchestrator | Tuesday 24 March 2026 05:08:43 +0000 
(0:00:01.111) 0:19:24.469 ********* 2026-03-24 05:09:02.705466 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705475 | orchestrator | 2026-03-24 05:09:02.705485 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-24 05:09:02.705495 | orchestrator | Tuesday 24 March 2026 05:08:44 +0000 (0:00:01.101) 0:19:25.571 ********* 2026-03-24 05:09:02.705504 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705514 | orchestrator | 2026-03-24 05:09:02.705524 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-24 05:09:02.705534 | orchestrator | Tuesday 24 March 2026 05:08:45 +0000 (0:00:01.129) 0:19:26.700 ********* 2026-03-24 05:09:02.705543 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705553 | orchestrator | 2026-03-24 05:09:02.705563 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-24 05:09:02.705572 | orchestrator | Tuesday 24 March 2026 05:08:46 +0000 (0:00:01.196) 0:19:27.897 ********* 2026-03-24 05:09:02.705582 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705591 | orchestrator | 2026-03-24 05:09:02.705601 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-24 05:09:02.705611 | orchestrator | Tuesday 24 March 2026 05:08:48 +0000 (0:00:01.124) 0:19:29.021 ********* 2026-03-24 05:09:02.705620 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705630 | orchestrator | 2026-03-24 05:09:02.705647 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-24 05:09:02.705664 | orchestrator | Tuesday 24 March 2026 05:08:49 +0000 (0:00:01.124) 0:19:30.146 ********* 2026-03-24 05:09:02.705692 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705709 | orchestrator | 2026-03-24 05:09:02.705727 | orchestrator | TASK 
[ceph-container-common : Get ceph version] ******************************** 2026-03-24 05:09:02.705744 | orchestrator | Tuesday 24 March 2026 05:08:50 +0000 (0:00:01.129) 0:19:31.276 ********* 2026-03-24 05:09:02.705761 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705778 | orchestrator | 2026-03-24 05:09:02.705795 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-24 05:09:02.705834 | orchestrator | Tuesday 24 March 2026 05:08:51 +0000 (0:00:01.129) 0:19:32.405 ********* 2026-03-24 05:09:02.705851 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705868 | orchestrator | 2026-03-24 05:09:02.705884 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-24 05:09:02.705899 | orchestrator | Tuesday 24 March 2026 05:08:52 +0000 (0:00:01.107) 0:19:33.512 ********* 2026-03-24 05:09:02.705915 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705930 | orchestrator | 2026-03-24 05:09:02.705946 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-24 05:09:02.705963 | orchestrator | Tuesday 24 March 2026 05:08:53 +0000 (0:00:01.220) 0:19:34.733 ********* 2026-03-24 05:09:02.705979 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.705995 | orchestrator | 2026-03-24 05:09:02.706011 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-24 05:09:02.706117 | orchestrator | Tuesday 24 March 2026 05:08:54 +0000 (0:00:01.126) 0:19:35.859 ********* 2026-03-24 05:09:02.706133 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.706150 | orchestrator | 2026-03-24 05:09:02.706206 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-24 05:09:02.706224 | orchestrator | Tuesday 24 March 2026 05:08:56 +0000 (0:00:01.180) 0:19:37.040 ********* 2026-03-24 
05:09:02.706241 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.706258 | orchestrator | 2026-03-24 05:09:02.706276 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-24 05:09:02.706292 | orchestrator | Tuesday 24 March 2026 05:08:57 +0000 (0:00:01.094) 0:19:38.134 ********* 2026-03-24 05:09:02.706309 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.706319 | orchestrator | 2026-03-24 05:09:02.706329 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-24 05:09:02.706339 | orchestrator | Tuesday 24 March 2026 05:08:58 +0000 (0:00:00.915) 0:19:39.050 ********* 2026-03-24 05:09:02.706349 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.706358 | orchestrator | 2026-03-24 05:09:02.706368 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-24 05:09:02.706377 | orchestrator | Tuesday 24 March 2026 05:08:59 +0000 (0:00:00.913) 0:19:39.963 ********* 2026-03-24 05:09:02.706387 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.706396 | orchestrator | 2026-03-24 05:09:02.706406 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-24 05:09:02.706416 | orchestrator | Tuesday 24 March 2026 05:08:59 +0000 (0:00:00.901) 0:19:40.865 ********* 2026-03-24 05:09:02.706425 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.706435 | orchestrator | 2026-03-24 05:09:02.706445 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-24 05:09:02.706456 | orchestrator | Tuesday 24 March 2026 05:09:00 +0000 (0:00:00.930) 0:19:41.795 ********* 2026-03-24 05:09:02.706466 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.706475 | orchestrator | 2026-03-24 05:09:02.706493 | orchestrator | TASK [ceph-config : Set_fact num_osds from the 
output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-24 05:09:02.706504 | orchestrator | Tuesday 24 March 2026 05:09:01 +0000 (0:00:00.890) 0:19:42.687 ********* 2026-03-24 05:09:02.706513 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:02.706523 | orchestrator | 2026-03-24 05:09:02.706545 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-24 05:09:40.743449 | orchestrator | Tuesday 24 March 2026 05:09:02 +0000 (0:00:00.905) 0:19:43.592 ********* 2026-03-24 05:09:40.743549 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743562 | orchestrator | 2026-03-24 05:09:40.743571 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 05:09:40.743579 | orchestrator | Tuesday 24 March 2026 05:09:03 +0000 (0:00:01.059) 0:19:44.652 ********* 2026-03-24 05:09:40.743586 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743591 | orchestrator | 2026-03-24 05:09:40.743598 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 05:09:40.743605 | orchestrator | Tuesday 24 March 2026 05:09:04 +0000 (0:00:01.085) 0:19:45.738 ********* 2026-03-24 05:09:40.743611 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743617 | orchestrator | 2026-03-24 05:09:40.743624 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 05:09:40.743631 | orchestrator | Tuesday 24 March 2026 05:09:05 +0000 (0:00:01.119) 0:19:46.857 ********* 2026-03-24 05:09:40.743638 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743645 | orchestrator | 2026-03-24 05:09:40.743651 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 05:09:40.743658 | orchestrator | Tuesday 24 March 2026 05:09:07 +0000 (0:00:01.085) 
0:19:47.943 ********* 2026-03-24 05:09:40.743664 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743670 | orchestrator | 2026-03-24 05:09:40.743677 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 05:09:40.743683 | orchestrator | Tuesday 24 March 2026 05:09:08 +0000 (0:00:01.198) 0:19:49.141 ********* 2026-03-24 05:09:40.743690 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743697 | orchestrator | 2026-03-24 05:09:40.743703 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-24 05:09:40.743710 | orchestrator | Tuesday 24 March 2026 05:09:09 +0000 (0:00:01.094) 0:19:50.236 ********* 2026-03-24 05:09:40.743716 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743723 | orchestrator | 2026-03-24 05:09:40.743730 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:09:40.743736 | orchestrator | Tuesday 24 March 2026 05:09:10 +0000 (0:00:01.237) 0:19:51.474 ********* 2026-03-24 05:09:40.743742 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743749 | orchestrator | 2026-03-24 05:09:40.743756 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:09:40.743762 | orchestrator | Tuesday 24 March 2026 05:09:11 +0000 (0:00:01.097) 0:19:52.572 ********* 2026-03-24 05:09:40.743768 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743775 | orchestrator | 2026-03-24 05:09:40.743782 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:09:40.743789 | orchestrator | Tuesday 24 March 2026 05:09:12 +0000 (0:00:01.112) 0:19:53.685 ********* 2026-03-24 05:09:40.743795 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743802 | orchestrator | 2026-03-24 
05:09:40.743808 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:09:40.743814 | orchestrator | Tuesday 24 March 2026 05:09:13 +0000 (0:00:01.126) 0:19:54.811 ********* 2026-03-24 05:09:40.743820 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743825 | orchestrator | 2026-03-24 05:09:40.743831 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:09:40.743836 | orchestrator | Tuesday 24 March 2026 05:09:15 +0000 (0:00:01.194) 0:19:56.006 ********* 2026-03-24 05:09:40.743843 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743849 | orchestrator | 2026-03-24 05:09:40.743854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:09:40.743860 | orchestrator | Tuesday 24 March 2026 05:09:16 +0000 (0:00:01.145) 0:19:57.151 ********* 2026-03-24 05:09:40.743866 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.743974 | orchestrator | 2026-03-24 05:09:40.743984 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:09:40.743990 | orchestrator | Tuesday 24 March 2026 05:09:17 +0000 (0:00:01.112) 0:19:58.264 ********* 2026-03-24 05:09:40.743996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-24 05:09:40.744002 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-24 05:09:40.744008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-24 05:09:40.744015 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744020 | orchestrator | 2026-03-24 05:09:40.744026 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:09:40.744032 | orchestrator | Tuesday 24 March 2026 05:09:18 +0000 (0:00:01.354) 0:19:59.619 ********* 2026-03-24 05:09:40.744038 | orchestrator | 
skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-24 05:09:40.744044 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-24 05:09:40.744050 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-24 05:09:40.744055 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744061 | orchestrator | 2026-03-24 05:09:40.744067 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:09:40.744073 | orchestrator | Tuesday 24 March 2026 05:09:20 +0000 (0:00:01.709) 0:20:01.328 ********* 2026-03-24 05:09:40.744079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-24 05:09:40.744084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-24 05:09:40.744091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-24 05:09:40.744097 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744104 | orchestrator | 2026-03-24 05:09:40.744125 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:09:40.744131 | orchestrator | Tuesday 24 March 2026 05:09:22 +0000 (0:00:01.681) 0:20:03.010 ********* 2026-03-24 05:09:40.744137 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744143 | orchestrator | 2026-03-24 05:09:40.744148 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:09:40.744174 | orchestrator | Tuesday 24 March 2026 05:09:23 +0000 (0:00:01.084) 0:20:04.094 ********* 2026-03-24 05:09:40.744180 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-24 05:09:40.744186 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744192 | orchestrator | 2026-03-24 05:09:40.744198 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:09:40.744205 | orchestrator | Tuesday 24 March 2026 
05:09:24 +0000 (0:00:01.269) 0:20:05.363 ********* 2026-03-24 05:09:40.744210 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744216 | orchestrator | 2026-03-24 05:09:40.744222 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-24 05:09:40.744228 | orchestrator | Tuesday 24 March 2026 05:09:25 +0000 (0:00:01.150) 0:20:06.514 ********* 2026-03-24 05:09:40.744234 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 05:09:40.744241 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 05:09:40.744247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 05:09:40.744253 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744259 | orchestrator | 2026-03-24 05:09:40.744266 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-24 05:09:40.744272 | orchestrator | Tuesday 24 March 2026 05:09:27 +0000 (0:00:01.410) 0:20:07.924 ********* 2026-03-24 05:09:40.744278 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744284 | orchestrator | 2026-03-24 05:09:40.744291 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-24 05:09:40.744297 | orchestrator | Tuesday 24 March 2026 05:09:28 +0000 (0:00:01.120) 0:20:09.044 ********* 2026-03-24 05:09:40.744302 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744317 | orchestrator | 2026-03-24 05:09:40.744324 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-24 05:09:40.744329 | orchestrator | Tuesday 24 March 2026 05:09:29 +0000 (0:00:01.128) 0:20:10.173 ********* 2026-03-24 05:09:40.744336 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744343 | orchestrator | 2026-03-24 05:09:40.744349 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-03-24 05:09:40.744354 | orchestrator | Tuesday 24 March 2026 05:09:30 +0000 (0:00:01.132) 0:20:11.305 ********* 2026-03-24 05:09:40.744360 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:09:40.744366 | orchestrator | 2026-03-24 05:09:40.744372 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-24 05:09:40.744379 | orchestrator | 2026-03-24 05:09:40.744385 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-24 05:09:40.744390 | orchestrator | Tuesday 24 March 2026 05:09:31 +0000 (0:00:00.935) 0:20:12.241 ********* 2026-03-24 05:09:40.744396 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744402 | orchestrator | 2026-03-24 05:09:40.744408 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:09:40.744414 | orchestrator | Tuesday 24 March 2026 05:09:32 +0000 (0:00:00.767) 0:20:13.008 ********* 2026-03-24 05:09:40.744419 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744425 | orchestrator | 2026-03-24 05:09:40.744431 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 05:09:40.744437 | orchestrator | Tuesday 24 March 2026 05:09:32 +0000 (0:00:00.863) 0:20:13.872 ********* 2026-03-24 05:09:40.744443 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744449 | orchestrator | 2026-03-24 05:09:40.744455 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-24 05:09:40.744462 | orchestrator | Tuesday 24 March 2026 05:09:33 +0000 (0:00:00.825) 0:20:14.698 ********* 2026-03-24 05:09:40.744467 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744474 | orchestrator | 2026-03-24 05:09:40.744480 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 
2026-03-24 05:09:40.744487 | orchestrator | Tuesday 24 March 2026 05:09:34 +0000 (0:00:00.777) 0:20:15.475 ********* 2026-03-24 05:09:40.744493 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744499 | orchestrator | 2026-03-24 05:09:40.744505 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-24 05:09:40.744511 | orchestrator | Tuesday 24 March 2026 05:09:35 +0000 (0:00:00.777) 0:20:16.253 ********* 2026-03-24 05:09:40.744517 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744524 | orchestrator | 2026-03-24 05:09:40.744530 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-24 05:09:40.744536 | orchestrator | Tuesday 24 March 2026 05:09:36 +0000 (0:00:00.752) 0:20:17.005 ********* 2026-03-24 05:09:40.744542 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744548 | orchestrator | 2026-03-24 05:09:40.744554 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-24 05:09:40.744560 | orchestrator | Tuesday 24 March 2026 05:09:36 +0000 (0:00:00.762) 0:20:17.768 ********* 2026-03-24 05:09:40.744566 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744573 | orchestrator | 2026-03-24 05:09:40.744580 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-24 05:09:40.744586 | orchestrator | Tuesday 24 March 2026 05:09:37 +0000 (0:00:00.750) 0:20:18.518 ********* 2026-03-24 05:09:40.744592 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744599 | orchestrator | 2026-03-24 05:09:40.744605 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-24 05:09:40.744612 | orchestrator | Tuesday 24 March 2026 05:09:38 +0000 (0:00:00.783) 0:20:19.302 ********* 2026-03-24 05:09:40.744618 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744625 
| orchestrator | 2026-03-24 05:09:40.744632 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-24 05:09:40.744639 | orchestrator | Tuesday 24 March 2026 05:09:39 +0000 (0:00:00.781) 0:20:20.084 ********* 2026-03-24 05:09:40.744653 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744660 | orchestrator | 2026-03-24 05:09:40.744674 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-24 05:09:40.744681 | orchestrator | Tuesday 24 March 2026 05:09:39 +0000 (0:00:00.759) 0:20:20.843 ********* 2026-03-24 05:09:40.744688 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:09:40.744695 | orchestrator | 2026-03-24 05:09:40.744711 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-24 05:10:11.879078 | orchestrator | Tuesday 24 March 2026 05:09:40 +0000 (0:00:00.787) 0:20:21.631 ********* 2026-03-24 05:10:11.879213 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879233 | orchestrator | 2026-03-24 05:10:11.879246 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-24 05:10:11.879258 | orchestrator | Tuesday 24 March 2026 05:09:41 +0000 (0:00:00.772) 0:20:22.404 ********* 2026-03-24 05:10:11.879269 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879280 | orchestrator | 2026-03-24 05:10:11.879292 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-24 05:10:11.879303 | orchestrator | Tuesday 24 March 2026 05:09:42 +0000 (0:00:00.766) 0:20:23.170 ********* 2026-03-24 05:10:11.879314 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879325 | orchestrator | 2026-03-24 05:10:11.879336 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-24 05:10:11.879347 | orchestrator | Tuesday 24 March 2026 
05:09:43 +0000 (0:00:00.753) 0:20:23.924 ********* 2026-03-24 05:10:11.879357 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879368 | orchestrator | 2026-03-24 05:10:11.879378 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-24 05:10:11.879389 | orchestrator | Tuesday 24 March 2026 05:09:43 +0000 (0:00:00.811) 0:20:24.735 ********* 2026-03-24 05:10:11.879400 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879410 | orchestrator | 2026-03-24 05:10:11.879421 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-24 05:10:11.879432 | orchestrator | Tuesday 24 March 2026 05:09:44 +0000 (0:00:00.746) 0:20:25.481 ********* 2026-03-24 05:10:11.879442 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879453 | orchestrator | 2026-03-24 05:10:11.879464 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-24 05:10:11.879474 | orchestrator | Tuesday 24 March 2026 05:09:45 +0000 (0:00:00.791) 0:20:26.273 ********* 2026-03-24 05:10:11.879485 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879495 | orchestrator | 2026-03-24 05:10:11.879506 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-24 05:10:11.879518 | orchestrator | Tuesday 24 March 2026 05:09:46 +0000 (0:00:00.758) 0:20:27.032 ********* 2026-03-24 05:10:11.879528 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879541 | orchestrator | 2026-03-24 05:10:11.879554 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-24 05:10:11.879566 | orchestrator | Tuesday 24 March 2026 05:09:46 +0000 (0:00:00.794) 0:20:27.827 ********* 2026-03-24 05:10:11.879578 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879590 | orchestrator | 2026-03-24 05:10:11.879603 | 
orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-24 05:10:11.879615 | orchestrator | Tuesday 24 March 2026 05:09:47 +0000 (0:00:00.785) 0:20:28.613 ********* 2026-03-24 05:10:11.879627 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879639 | orchestrator | 2026-03-24 05:10:11.879651 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-24 05:10:11.879664 | orchestrator | Tuesday 24 March 2026 05:09:48 +0000 (0:00:00.779) 0:20:29.392 ********* 2026-03-24 05:10:11.879675 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879687 | orchestrator | 2026-03-24 05:10:11.879699 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-24 05:10:11.879735 | orchestrator | Tuesday 24 March 2026 05:09:49 +0000 (0:00:00.800) 0:20:30.193 ********* 2026-03-24 05:10:11.879748 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879761 | orchestrator | 2026-03-24 05:10:11.879773 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-24 05:10:11.879785 | orchestrator | Tuesday 24 March 2026 05:09:50 +0000 (0:00:00.762) 0:20:30.956 ********* 2026-03-24 05:10:11.879797 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879808 | orchestrator | 2026-03-24 05:10:11.879821 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-24 05:10:11.879833 | orchestrator | Tuesday 24 March 2026 05:09:50 +0000 (0:00:00.788) 0:20:31.744 ********* 2026-03-24 05:10:11.879844 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879856 | orchestrator | 2026-03-24 05:10:11.879868 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-24 05:10:11.879881 | orchestrator | Tuesday 24 March 2026 05:09:51 +0000 (0:00:00.811) 0:20:32.556 ********* 
2026-03-24 05:10:11.879893 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879904 | orchestrator | 2026-03-24 05:10:11.879915 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-24 05:10:11.879926 | orchestrator | Tuesday 24 March 2026 05:09:52 +0000 (0:00:00.783) 0:20:33.340 ********* 2026-03-24 05:10:11.879936 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.879947 | orchestrator | 2026-03-24 05:10:11.879958 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-24 05:10:11.879999 | orchestrator | Tuesday 24 March 2026 05:09:53 +0000 (0:00:00.774) 0:20:34.114 ********* 2026-03-24 05:10:11.880011 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880022 | orchestrator | 2026-03-24 05:10:11.880032 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-24 05:10:11.880043 | orchestrator | Tuesday 24 March 2026 05:09:54 +0000 (0:00:00.786) 0:20:34.901 ********* 2026-03-24 05:10:11.880054 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880064 | orchestrator | 2026-03-24 05:10:11.880075 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-24 05:10:11.880085 | orchestrator | Tuesday 24 March 2026 05:09:54 +0000 (0:00:00.775) 0:20:35.677 ********* 2026-03-24 05:10:11.880096 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880107 | orchestrator | 2026-03-24 05:10:11.880144 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-24 05:10:11.880164 | orchestrator | Tuesday 24 March 2026 05:09:55 +0000 (0:00:00.797) 0:20:36.474 ********* 2026-03-24 05:10:11.880182 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880200 | orchestrator | 2026-03-24 05:10:11.880214 | orchestrator | TASK [ceph-config : Include 
create_ceph_initial_dirs.yml] ********************** 2026-03-24 05:10:11.880242 | orchestrator | Tuesday 24 March 2026 05:09:56 +0000 (0:00:00.761) 0:20:37.235 ********* 2026-03-24 05:10:11.880254 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880265 | orchestrator | 2026-03-24 05:10:11.880275 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-24 05:10:11.880286 | orchestrator | Tuesday 24 March 2026 05:09:57 +0000 (0:00:00.765) 0:20:38.001 ********* 2026-03-24 05:10:11.880296 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880307 | orchestrator | 2026-03-24 05:10:11.880318 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-24 05:10:11.880328 | orchestrator | Tuesday 24 March 2026 05:09:57 +0000 (0:00:00.750) 0:20:38.751 ********* 2026-03-24 05:10:11.880339 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880349 | orchestrator | 2026-03-24 05:10:11.880360 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-24 05:10:11.880371 | orchestrator | Tuesday 24 March 2026 05:09:58 +0000 (0:00:00.786) 0:20:39.538 ********* 2026-03-24 05:10:11.880381 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880391 | orchestrator | 2026-03-24 05:10:11.880402 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-24 05:10:11.880423 | orchestrator | Tuesday 24 March 2026 05:09:59 +0000 (0:00:00.783) 0:20:40.321 ********* 2026-03-24 05:10:11.880433 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880444 | orchestrator | 2026-03-24 05:10:11.880455 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-24 05:10:11.880465 | orchestrator | Tuesday 24 March 2026 05:10:00 +0000 (0:00:00.774) 0:20:41.096 ********* 2026-03-24 05:10:11.880476 | orchestrator | 
skipping: [testbed-node-1] 2026-03-24 05:10:11.880486 | orchestrator | 2026-03-24 05:10:11.880497 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-24 05:10:11.880507 | orchestrator | Tuesday 24 March 2026 05:10:00 +0000 (0:00:00.799) 0:20:41.896 ********* 2026-03-24 05:10:11.880518 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880528 | orchestrator | 2026-03-24 05:10:11.880539 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-24 05:10:11.880551 | orchestrator | Tuesday 24 March 2026 05:10:01 +0000 (0:00:00.760) 0:20:42.657 ********* 2026-03-24 05:10:11.880562 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880572 | orchestrator | 2026-03-24 05:10:11.880583 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-24 05:10:11.880594 | orchestrator | Tuesday 24 March 2026 05:10:02 +0000 (0:00:00.760) 0:20:43.417 ********* 2026-03-24 05:10:11.880604 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880615 | orchestrator | 2026-03-24 05:10:11.880626 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-24 05:10:11.880637 | orchestrator | Tuesday 24 March 2026 05:10:03 +0000 (0:00:00.755) 0:20:44.173 ********* 2026-03-24 05:10:11.880647 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880658 | orchestrator | 2026-03-24 05:10:11.880668 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 05:10:11.880679 | orchestrator | Tuesday 24 March 2026 05:10:04 +0000 (0:00:00.773) 0:20:44.947 ********* 2026-03-24 05:10:11.880689 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880700 | orchestrator | 2026-03-24 05:10:11.880710 | orchestrator | TASK 
[ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 05:10:11.880721 | orchestrator | Tuesday 24 March 2026 05:10:04 +0000 (0:00:00.756) 0:20:45.704 ********* 2026-03-24 05:10:11.880732 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880742 | orchestrator | 2026-03-24 05:10:11.880753 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 05:10:11.880764 | orchestrator | Tuesday 24 March 2026 05:10:05 +0000 (0:00:00.777) 0:20:46.482 ********* 2026-03-24 05:10:11.880774 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880785 | orchestrator | 2026-03-24 05:10:11.880795 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 05:10:11.880806 | orchestrator | Tuesday 24 March 2026 05:10:06 +0000 (0:00:00.768) 0:20:47.250 ********* 2026-03-24 05:10:11.880816 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880826 | orchestrator | 2026-03-24 05:10:11.880837 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 05:10:11.880848 | orchestrator | Tuesday 24 March 2026 05:10:07 +0000 (0:00:00.849) 0:20:48.099 ********* 2026-03-24 05:10:11.880858 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880868 | orchestrator | 2026-03-24 05:10:11.880879 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-24 05:10:11.880890 | orchestrator | Tuesday 24 March 2026 05:10:07 +0000 (0:00:00.773) 0:20:48.873 ********* 2026-03-24 05:10:11.880900 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880911 | orchestrator | 2026-03-24 05:10:11.880921 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:10:11.880932 | orchestrator | Tuesday 24 March 2026 05:10:08 +0000 (0:00:00.848) 0:20:49.722 ********* 2026-03-24 
05:10:11.880942 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.880959 | orchestrator | 2026-03-24 05:10:11.880998 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:10:11.881010 | orchestrator | Tuesday 24 March 2026 05:10:09 +0000 (0:00:00.767) 0:20:50.489 ********* 2026-03-24 05:10:11.881020 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.881031 | orchestrator | 2026-03-24 05:10:11.881042 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:10:11.881054 | orchestrator | Tuesday 24 March 2026 05:10:10 +0000 (0:00:00.757) 0:20:51.247 ********* 2026-03-24 05:10:11.881071 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.881082 | orchestrator | 2026-03-24 05:10:11.881093 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:10:11.881103 | orchestrator | Tuesday 24 March 2026 05:10:11 +0000 (0:00:00.766) 0:20:52.013 ********* 2026-03-24 05:10:11.881114 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:11.881124 | orchestrator | 2026-03-24 05:10:11.881142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:10:41.577462 | orchestrator | Tuesday 24 March 2026 05:10:11 +0000 (0:00:00.755) 0:20:52.769 ********* 2026-03-24 05:10:41.577540 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:41.577547 | orchestrator | 2026-03-24 05:10:41.577553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:10:41.577557 | orchestrator | Tuesday 24 March 2026 05:10:12 +0000 (0:00:00.762) 0:20:53.531 ********* 2026-03-24 05:10:41.577561 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:10:41.577565 | orchestrator | 2026-03-24 05:10:41.577569 | orchestrator | TASK 
[ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:10:41.577574 | orchestrator | Tuesday 24 March 2026 05:10:13 +0000 (0:00:00.774) 0:20:54.306 *********
2026-03-24 05:10:41.577578 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-24 05:10:41.577592 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-24 05:10:41.577596 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-24 05:10:41.577605 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577609 | orchestrator |
2026-03-24 05:10:41.577613 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:10:41.577617 | orchestrator | Tuesday 24 March 2026 05:10:14 +0000 (0:00:01.097) 0:20:55.404 *********
2026-03-24 05:10:41.577621 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-24 05:10:41.577626 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-24 05:10:41.577629 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-24 05:10:41.577633 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577637 | orchestrator |
2026-03-24 05:10:41.577641 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:10:41.577645 | orchestrator | Tuesday 24 March 2026 05:10:15 +0000 (0:00:01.082) 0:20:56.486 *********
2026-03-24 05:10:41.577649 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-24 05:10:41.577653 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-24 05:10:41.577657 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-24 05:10:41.577660 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577664 | orchestrator |
2026-03-24 05:10:41.577668 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:10:41.577672 | orchestrator | Tuesday 24 March 2026 05:10:16 +0000 (0:00:01.038) 0:20:57.525 *********
2026-03-24 05:10:41.577676 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577679 | orchestrator |
2026-03-24 05:10:41.577683 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:10:41.577687 | orchestrator | Tuesday 24 March 2026 05:10:17 +0000 (0:00:00.766) 0:20:58.291 *********
2026-03-24 05:10:41.577692 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-24 05:10:41.577712 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577716 | orchestrator |
2026-03-24 05:10:41.577720 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-24 05:10:41.577724 | orchestrator | Tuesday 24 March 2026 05:10:18 +0000 (0:00:00.880) 0:20:59.172 *********
2026-03-24 05:10:41.577728 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577731 | orchestrator |
2026-03-24 05:10:41.577735 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-24 05:10:41.577739 | orchestrator | Tuesday 24 March 2026 05:10:19 +0000 (0:00:00.780) 0:20:59.952 *********
2026-03-24 05:10:41.577743 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-24 05:10:41.577746 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-24 05:10:41.577750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-24 05:10:41.577754 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577758 | orchestrator |
2026-03-24 05:10:41.577761 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-24 05:10:41.577765 | orchestrator | Tuesday 24 March 2026 05:10:20 +0000 (0:00:01.406) 0:21:01.359 *********
2026-03-24 05:10:41.577769 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577773 | orchestrator |
2026-03-24 05:10:41.577776 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-24 05:10:41.577780 | orchestrator | Tuesday 24 March 2026 05:10:21 +0000 (0:00:00.787) 0:21:02.147 *********
2026-03-24 05:10:41.577784 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577788 | orchestrator |
2026-03-24 05:10:41.577791 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-24 05:10:41.577795 | orchestrator | Tuesday 24 March 2026 05:10:22 +0000 (0:00:00.759) 0:21:02.907 *********
2026-03-24 05:10:41.577799 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577802 | orchestrator |
2026-03-24 05:10:41.577806 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-24 05:10:41.577810 | orchestrator | Tuesday 24 March 2026 05:10:22 +0000 (0:00:00.761) 0:21:03.668 *********
2026-03-24 05:10:41.577814 | orchestrator | skipping: [testbed-node-1]
2026-03-24 05:10:41.577818 | orchestrator |
2026-03-24 05:10:41.577821 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-24 05:10:41.577825 | orchestrator |
2026-03-24 05:10:41.577829 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-24 05:10:41.577833 | orchestrator | Tuesday 24 March 2026 05:10:23 +0000 (0:00:01.010) 0:21:04.679 *********
2026-03-24 05:10:41.577837 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.577840 | orchestrator |
2026-03-24 05:10:41.577844 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-24 05:10:41.577848 | orchestrator | Tuesday 24 March 2026 05:10:24 +0000 (0:00:00.772) 0:21:05.452 *********
2026-03-24 05:10:41.577861 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.577865 | orchestrator |
2026-03-24 05:10:41.577869 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 05:10:41.577873 | orchestrator | Tuesday 24 March 2026 05:10:25 +0000 (0:00:00.777) 0:21:06.229 *********
2026-03-24 05:10:41.577877 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.577880 | orchestrator |
2026-03-24 05:10:41.577893 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:10:41.577897 | orchestrator | Tuesday 24 March 2026 05:10:26 +0000 (0:00:00.784) 0:21:07.014 *********
2026-03-24 05:10:41.577901 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.577906 | orchestrator |
2026-03-24 05:10:41.577909 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:10:41.577913 | orchestrator | Tuesday 24 March 2026 05:10:26 +0000 (0:00:00.768) 0:21:07.783 *********
2026-03-24 05:10:41.577917 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.577921 | orchestrator |
2026-03-24 05:10:41.577924 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:10:41.577932 | orchestrator | Tuesday 24 March 2026 05:10:27 +0000 (0:00:00.766) 0:21:08.549 *********
2026-03-24 05:10:41.577936 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.577940 | orchestrator |
2026-03-24 05:10:41.577944 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:10:41.577947 | orchestrator | Tuesday 24 March 2026 05:10:28 +0000 (0:00:00.782) 0:21:09.332 *********
2026-03-24 05:10:41.577951 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.577955 | orchestrator |
2026-03-24 05:10:41.577959 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:10:41.577962 | orchestrator | Tuesday 24 March 2026 05:10:29 +0000 (0:00:00.764) 0:21:10.096 *********
2026-03-24 05:10:41.577966 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.577970 | orchestrator |
2026-03-24 05:10:41.577974 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:10:41.577977 | orchestrator | Tuesday 24 March 2026 05:10:29 +0000 (0:00:00.768) 0:21:10.865 *********
2026-03-24 05:10:41.577981 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.577985 | orchestrator |
2026-03-24 05:10:41.577989 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:10:41.577993 | orchestrator | Tuesday 24 March 2026 05:10:30 +0000 (0:00:00.764) 0:21:11.630 *********
2026-03-24 05:10:41.577997 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578000 | orchestrator |
2026-03-24 05:10:41.578004 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:10:41.578008 | orchestrator | Tuesday 24 March 2026 05:10:31 +0000 (0:00:00.756) 0:21:12.387 *********
2026-03-24 05:10:41.578085 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578091 | orchestrator |
2026-03-24 05:10:41.578095 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:10:41.578100 | orchestrator | Tuesday 24 March 2026 05:10:32 +0000 (0:00:00.789) 0:21:13.176 *********
2026-03-24 05:10:41.578104 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578109 | orchestrator |
2026-03-24 05:10:41.578113 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:10:41.578117 | orchestrator | Tuesday 24 March 2026 05:10:33 +0000 (0:00:00.813) 0:21:13.990 *********
2026-03-24 05:10:41.578121 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578126 | orchestrator |
2026-03-24 05:10:41.578130 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:10:41.578134 | orchestrator | Tuesday 24 March 2026 05:10:33 +0000 (0:00:00.751) 0:21:14.741 *********
2026-03-24 05:10:41.578139 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578143 | orchestrator |
2026-03-24 05:10:41.578147 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 05:10:41.578151 | orchestrator | Tuesday 24 March 2026 05:10:34 +0000 (0:00:00.767) 0:21:15.509 *********
2026-03-24 05:10:41.578156 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578160 | orchestrator |
2026-03-24 05:10:41.578164 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 05:10:41.578168 | orchestrator | Tuesday 24 March 2026 05:10:35 +0000 (0:00:00.803) 0:21:16.313 *********
2026-03-24 05:10:41.578173 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578177 | orchestrator |
2026-03-24 05:10:41.578181 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 05:10:41.578185 | orchestrator | Tuesday 24 March 2026 05:10:36 +0000 (0:00:00.745) 0:21:17.058 *********
2026-03-24 05:10:41.578189 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578194 | orchestrator |
2026-03-24 05:10:41.578198 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 05:10:41.578202 | orchestrator | Tuesday 24 March 2026 05:10:36 +0000 (0:00:00.755) 0:21:17.813 *********
2026-03-24 05:10:41.578206 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578211 | orchestrator |
2026-03-24 05:10:41.578218 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 05:10:41.578229 | orchestrator | Tuesday 24 March 2026 05:10:37 +0000 (0:00:00.769) 0:21:18.583 *********
2026-03-24 05:10:41.578236 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578242 | orchestrator |
2026-03-24 05:10:41.578248 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 05:10:41.578256 | orchestrator | Tuesday 24 March 2026 05:10:38 +0000 (0:00:00.778) 0:21:19.362 *********
2026-03-24 05:10:41.578263 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578269 | orchestrator |
2026-03-24 05:10:41.578275 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 05:10:41.578282 | orchestrator | Tuesday 24 March 2026 05:10:39 +0000 (0:00:00.758) 0:21:20.120 *********
2026-03-24 05:10:41.578288 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578295 | orchestrator |
2026-03-24 05:10:41.578302 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 05:10:41.578308 | orchestrator | Tuesday 24 March 2026 05:10:40 +0000 (0:00:00.791) 0:21:20.912 *********
2026-03-24 05:10:41.578316 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578320 | orchestrator |
2026-03-24 05:10:41.578328 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 05:10:41.578333 | orchestrator | Tuesday 24 March 2026 05:10:40 +0000 (0:00:00.791) 0:21:21.703 *********
2026-03-24 05:10:41.578337 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:10:41.578341 | orchestrator |
2026-03-24 05:10:41.578346 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 05:10:41.578354 | orchestrator | Tuesday 24 March 2026 05:10:41 +0000 (0:00:00.763) 0:21:22.466 *********
2026-03-24 05:11:11.836923 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837044 | orchestrator |
2026-03-24 05:11:11.837064 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 05:11:11.837078 | orchestrator | Tuesday 24 March 2026 05:10:42 +0000 (0:00:00.763) 0:21:23.230 *********
2026-03-24 05:11:11.837137 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837147 | orchestrator |
2026-03-24 05:11:11.837154 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 05:11:11.837161 | orchestrator | Tuesday 24 March 2026 05:10:43 +0000 (0:00:00.759) 0:21:23.990 *********
2026-03-24 05:11:11.837169 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837175 | orchestrator |
2026-03-24 05:11:11.837182 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 05:11:11.837189 | orchestrator | Tuesday 24 March 2026 05:10:43 +0000 (0:00:00.767) 0:21:24.757 *********
2026-03-24 05:11:11.837196 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837203 | orchestrator |
2026-03-24 05:11:11.837210 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 05:11:11.837216 | orchestrator | Tuesday 24 March 2026 05:10:44 +0000 (0:00:00.762) 0:21:25.519 *********
2026-03-24 05:11:11.837223 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837230 | orchestrator |
2026-03-24 05:11:11.837236 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 05:11:11.837243 | orchestrator | Tuesday 24 March 2026 05:10:45 +0000 (0:00:00.797) 0:21:26.316 *********
2026-03-24 05:11:11.837250 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837256 | orchestrator |
2026-03-24 05:11:11.837263 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 05:11:11.837270 | orchestrator | Tuesday 24 March 2026 05:10:46 +0000 (0:00:00.780) 0:21:27.097 *********
2026-03-24 05:11:11.837277 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837284 | orchestrator |
2026-03-24 05:11:11.837291 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 05:11:11.837298 | orchestrator | Tuesday 24 March 2026 05:10:46 +0000 (0:00:00.775) 0:21:27.873 *********
2026-03-24 05:11:11.837305 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837311 | orchestrator |
2026-03-24 05:11:11.837340 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 05:11:11.837347 | orchestrator | Tuesday 24 March 2026 05:10:47 +0000 (0:00:00.785) 0:21:28.659 *********
2026-03-24 05:11:11.837353 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837360 | orchestrator |
2026-03-24 05:11:11.837367 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 05:11:11.837373 | orchestrator | Tuesday 24 March 2026 05:10:48 +0000 (0:00:00.848) 0:21:29.507 *********
2026-03-24 05:11:11.837380 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837386 | orchestrator |
2026-03-24 05:11:11.837393 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-24 05:11:11.837400 | orchestrator | Tuesday 24 March 2026 05:10:49 +0000 (0:00:00.791) 0:21:30.298 *********
2026-03-24 05:11:11.837406 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837413 | orchestrator |
2026-03-24 05:11:11.837420 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-24 05:11:11.837426 | orchestrator | Tuesday 24 March 2026 05:10:50 +0000 (0:00:00.785) 0:21:31.084 *********
2026-03-24 05:11:11.837433 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837439 | orchestrator |
2026-03-24 05:11:11.837446 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-24 05:11:11.837452 | orchestrator | Tuesday 24 March 2026 05:10:50 +0000 (0:00:00.785) 0:21:31.870 *********
2026-03-24 05:11:11.837460 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837467 | orchestrator |
2026-03-24 05:11:11.837475 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-24 05:11:11.837483 | orchestrator | Tuesday 24 March 2026 05:10:51 +0000 (0:00:00.801) 0:21:32.672 *********
2026-03-24 05:11:11.837490 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837498 | orchestrator |
2026-03-24 05:11:11.837505 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-24 05:11:11.837513 | orchestrator | Tuesday 24 March 2026 05:10:52 +0000 (0:00:00.787) 0:21:33.459 *********
2026-03-24 05:11:11.837530 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837538 | orchestrator |
2026-03-24 05:11:11.837554 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-24 05:11:11.837562 | orchestrator | Tuesday 24 March 2026 05:10:53 +0000 (0:00:00.755) 0:21:34.215 *********
2026-03-24 05:11:11.837569 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837577 | orchestrator |
2026-03-24 05:11:11.837584 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-24 05:11:11.837593 | orchestrator | Tuesday 24 March 2026 05:10:54 +0000 (0:00:00.764) 0:21:34.980 *********
2026-03-24 05:11:11.837600 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837608 | orchestrator |
2026-03-24 05:11:11.837615 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-24 05:11:11.837623 | orchestrator | Tuesday 24 March 2026 05:10:54 +0000 (0:00:00.771) 0:21:35.751 *********
2026-03-24 05:11:11.837630 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837638 | orchestrator |
2026-03-24 05:11:11.837646 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-24 05:11:11.837653 | orchestrator | Tuesday 24 March 2026 05:10:55 +0000 (0:00:00.766) 0:21:36.517 *********
2026-03-24 05:11:11.837661 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837668 | orchestrator |
2026-03-24 05:11:11.837688 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-24 05:11:11.837696 | orchestrator | Tuesday 24 March 2026 05:10:56 +0000 (0:00:00.759) 0:21:37.276 *********
2026-03-24 05:11:11.837704 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837711 | orchestrator |
2026-03-24 05:11:11.837719 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-24 05:11:11.837741 | orchestrator | Tuesday 24 March 2026 05:10:57 +0000 (0:00:00.768) 0:21:38.044 *********
2026-03-24 05:11:11.837756 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837764 | orchestrator |
2026-03-24 05:11:11.837772 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-24 05:11:11.837780 | orchestrator | Tuesday 24 March 2026 05:10:57 +0000 (0:00:00.772) 0:21:38.817 *********
2026-03-24 05:11:11.837787 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837795 | orchestrator |
2026-03-24 05:11:11.837802 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-24 05:11:11.837810 | orchestrator | Tuesday 24 March 2026 05:10:58 +0000 (0:00:00.779) 0:21:39.596 *********
2026-03-24 05:11:11.837817 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837823 | orchestrator |
2026-03-24 05:11:11.837830 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-24 05:11:11.837836 | orchestrator | Tuesday 24 March 2026 05:10:59 +0000 (0:00:00.855) 0:21:40.452 *********
2026-03-24 05:11:11.837843 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837849 | orchestrator |
2026-03-24 05:11:11.837856 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-24 05:11:11.837862 | orchestrator | Tuesday 24 March 2026 05:11:00 +0000 (0:00:00.758) 0:21:41.210 *********
2026-03-24 05:11:11.837869 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837875 | orchestrator |
2026-03-24 05:11:11.837882 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-24 05:11:11.837888 | orchestrator | Tuesday 24 March 2026 05:11:01 +0000 (0:00:00.886) 0:21:42.096 *********
2026-03-24 05:11:11.837895 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837901 | orchestrator |
2026-03-24 05:11:11.837908 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-24 05:11:11.837914 | orchestrator | Tuesday 24 March 2026 05:11:01 +0000 (0:00:00.771) 0:21:42.868 *********
2026-03-24 05:11:11.837920 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837927 | orchestrator |
2026-03-24 05:11:11.837934 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:11:11.837942 | orchestrator | Tuesday 24 March 2026 05:11:02 +0000 (0:00:00.758) 0:21:43.626 *********
2026-03-24 05:11:11.837948 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837955 | orchestrator |
2026-03-24 05:11:11.837961 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:11:11.837968 | orchestrator | Tuesday 24 March 2026 05:11:03 +0000 (0:00:00.754) 0:21:44.381 *********
2026-03-24 05:11:11.837975 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.837981 | orchestrator |
2026-03-24 05:11:11.837988 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:11:11.837994 | orchestrator | Tuesday 24 March 2026 05:11:04 +0000 (0:00:00.764) 0:21:45.145 *********
2026-03-24 05:11:11.838001 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.838009 | orchestrator |
2026-03-24 05:11:11.838083 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:11:11.838137 | orchestrator | Tuesday 24 March 2026 05:11:05 +0000 (0:00:00.808) 0:21:45.954 *********
2026-03-24 05:11:11.838144 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.838151 | orchestrator |
2026-03-24 05:11:11.838158 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:11:11.838164 | orchestrator | Tuesday 24 March 2026 05:11:05 +0000 (0:00:00.745) 0:21:46.699 *********
2026-03-24 05:11:11.838171 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-24 05:11:11.838178 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-24 05:11:11.838184 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-24 05:11:11.838191 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.838197 | orchestrator |
2026-03-24 05:11:11.838204 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:11:11.838211 | orchestrator | Tuesday 24 March 2026 05:11:06 +0000 (0:00:01.003) 0:21:47.703 *********
2026-03-24 05:11:11.838224 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-24 05:11:11.838231 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-24 05:11:11.838238 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-24 05:11:11.838244 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.838251 | orchestrator |
2026-03-24 05:11:11.838258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:11:11.838264 | orchestrator | Tuesday 24 March 2026 05:11:08 +0000 (0:00:01.320) 0:21:49.024 *********
2026-03-24 05:11:11.838271 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-24 05:11:11.838278 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-24 05:11:11.838284 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-24 05:11:11.838291 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.838297 | orchestrator |
2026-03-24 05:11:11.838304 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:11:11.838311 | orchestrator | Tuesday 24 March 2026 05:11:09 +0000 (0:00:01.305) 0:21:50.329 *********
2026-03-24 05:11:11.838317 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.838324 | orchestrator |
2026-03-24 05:11:11.838331 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:11:11.838337 | orchestrator | Tuesday 24 March 2026 05:11:10 +0000 (0:00:00.762) 0:21:51.091 *********
2026-03-24 05:11:11.838344 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-24 05:11:11.838351 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.838357 | orchestrator |
2026-03-24 05:11:11.838369 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-24 05:11:11.838376 | orchestrator | Tuesday 24 March 2026 05:11:11 +0000 (0:00:00.861) 0:21:51.953 *********
2026-03-24 05:11:11.838382 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:11.838389 | orchestrator |
2026-03-24 05:11:11.838396 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-24 05:11:11.838409 | orchestrator | Tuesday 24 March 2026 05:11:11 +0000 (0:00:00.771) 0:21:52.725 *********
2026-03-24 05:11:44.957589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-24 05:11:44.957705 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-24 05:11:44.957722 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-24 05:11:44.957734 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:44.957746 | orchestrator |
2026-03-24 05:11:44.957759 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-24 05:11:44.957771 | orchestrator | Tuesday 24 March 2026 05:11:12 +0000 (0:00:01.019) 0:21:53.744 *********
2026-03-24 05:11:44.957782 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:44.957793 | orchestrator |
2026-03-24 05:11:44.957804 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-24 05:11:44.957815 | orchestrator | Tuesday 24 March 2026 05:11:13 +0000 (0:00:00.764) 0:21:54.508 *********
2026-03-24 05:11:44.957826 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:44.957837 | orchestrator |
2026-03-24 05:11:44.957848 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-24 05:11:44.957859 | orchestrator | Tuesday 24 March 2026 05:11:14 +0000 (0:00:00.748) 0:21:55.257 *********
2026-03-24 05:11:44.957869 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:44.957880 | orchestrator |
2026-03-24 05:11:44.957891 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-24 05:11:44.957902 | orchestrator | Tuesday 24 March 2026 05:11:15 +0000 (0:00:00.784) 0:21:56.041 *********
2026-03-24 05:11:44.957913 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:11:44.957924 | orchestrator |
2026-03-24 05:11:44.957935 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-03-24 05:11:44.957946 | orchestrator |
2026-03-24 05:11:44.957957 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-24 05:11:44.957994 | orchestrator | Tuesday 24 March 2026 05:11:16 +0000 (0:00:01.354) 0:21:57.396 *********
2026-03-24 05:11:44.958006 | orchestrator | changed: [testbed-node-0]
2026-03-24 05:11:44.958073 | orchestrator |
2026-03-24 05:11:44.958085 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-03-24 05:11:44.958096 | orchestrator | Tuesday 24 March 2026 05:11:19 +0000 (0:00:03.004) 0:22:00.400 *********
2026-03-24 05:11:44.958107 | orchestrator | changed: [testbed-node-0]
2026-03-24 05:11:44.958118 | orchestrator |
2026-03-24 05:11:44.958129 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-24 05:11:44.958140 | orchestrator | Tuesday 24 March 2026 05:11:22 +0000 (0:00:02.600) 0:22:03.001 *********
2026-03-24 05:11:44.958211 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-03-24 05:11:44.958222 | orchestrator |
2026-03-24 05:11:44.958233 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-24 05:11:44.958244 | orchestrator | Tuesday 24 March 2026 05:11:23 +0000 (0:00:01.142) 0:22:04.143 *********
2026-03-24 05:11:44.958255 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:11:44.958268 | orchestrator |
2026-03-24 05:11:44.958290 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-24 05:11:44.958301 | orchestrator | Tuesday 24 March 2026 05:11:24 +0000 (0:00:01.498) 0:22:05.642 *********
2026-03-24 05:11:44.958312 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:11:44.958323 | orchestrator |
2026-03-24 05:11:44.958334 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-24 05:11:44.958345 | orchestrator | Tuesday 24 March 2026 05:11:25 +0000 (0:00:01.142) 0:22:06.785 *********
2026-03-24 05:11:44.958355 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:11:44.958366 | orchestrator |
2026-03-24 05:11:44.958377 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-24 05:11:44.958388 | orchestrator | Tuesday 24 March 2026 05:11:27 +0000 (0:00:01.500) 0:22:08.285 *********
2026-03-24 05:11:44.958399 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:11:44.958410 | orchestrator |
2026-03-24 05:11:44.958421 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-24 05:11:44.958432 | orchestrator | Tuesday 24 March 2026 05:11:28 +0000 (0:00:01.134) 0:22:09.420 *********
2026-03-24 05:11:44.958442 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:11:44.958453 | orchestrator |
2026-03-24 05:11:44.958464 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-24 05:11:44.958475 | orchestrator | Tuesday 24 March 2026 05:11:29 +0000 (0:00:01.132) 0:22:10.552 *********
2026-03-24 05:11:44.958485 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:11:44.958496 | orchestrator |
2026-03-24 05:11:44.958507 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-24 05:11:44.958519 | orchestrator | Tuesday 24 March 2026 05:11:30 +0000 (0:00:01.159) 0:22:11.712 *********
2026-03-24 05:11:44.958529 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:11:44.958540 | orchestrator |
2026-03-24 05:11:44.958551 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-24 05:11:44.958562 | orchestrator | Tuesday 24 March 2026 05:11:31 +0000 (0:00:01.125) 0:22:12.837 *********
2026-03-24 05:11:44.958573 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:11:44.958583 | orchestrator |
2026-03-24 05:11:44.958594 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-24 05:11:44.958605 | orchestrator | Tuesday 24 March 2026 05:11:33 +0000 (0:00:01.098) 0:22:13.936 *********
2026-03-24 05:11:44.958616 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 05:11:44.958627 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:11:44.958638 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:11:44.958648 | orchestrator |
2026-03-24 05:11:44.958675 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-24 05:11:44.958696 | orchestrator | Tuesday 24 March 2026 05:11:34 +0000 (0:00:01.897) 0:22:15.833 *********
2026-03-24 05:11:44.958707 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:11:44.958718 | orchestrator |
2026-03-24 05:11:44.958729 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-24 05:11:44.958740 | orchestrator | Tuesday 24 March 2026 05:11:36 +0000 (0:00:01.266) 0:22:17.100 *********
2026-03-24 05:11:44.958770 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 05:11:44.958782 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:11:44.958793 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:11:44.958803 | orchestrator |
2026-03-24 05:11:44.958815 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-24 05:11:44.958826 | orchestrator | Tuesday 24 March 2026 05:11:39 +0000 (0:00:03.126) 0:22:20.227 *********
2026-03-24 05:11:44.958836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 05:11:44.958847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-24 05:11:44.958858 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-24 05:11:44.958869 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:11:44.958880 | orchestrator |
2026-03-24 05:11:44.958901 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-24 05:11:44.958912 | orchestrator | Tuesday 24 March 2026 05:11:41 +0000 (0:00:01.695) 0:22:21.922 *********
2026-03-24 05:11:44.958925 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-24 05:11:44.958939 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-24 05:11:44.958950 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-24 05:11:44.958962 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:11:44.958972 | orchestrator |
2026-03-24 05:11:44.958983 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-24 05:11:44.958994 | orchestrator | Tuesday 24 March 2026 05:11:42 +0000 (0:00:01.578) 0:22:23.502 *********
2026-03-24 05:11:44.959007 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-24 05:11:44.959020 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-24 05:11:44.959032 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-24 05:11:44.959050 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:11:44.959061 | orchestrator |
2026-03-24 05:11:44.959072 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-24 05:11:44.959083 | orchestrator | Tuesday 24 March 2026 05:11:43 +0000 (0:00:01.151) 0:22:24.653 *********
2026-03-24 05:11:44.959102 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:11:37.030777', 'end': '2026-03-24 05:11:37.091681', 'delta': '0:00:00.060904', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-24 05:11:44.959125 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:11:37.578011', 'end': '2026-03-24 05:11:37.624996', 'delta': '0:00:00.046985', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-24 05:12:03.509741 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:11:38.156429', 'end': '2026-03-24 05:11:38.197011', 'delta': '0:00:00.040582', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-24 05:12:03.509888 | orchestrator |
2026-03-24 05:12:03.509920 | orchestrator | TASK [ceph-facts :
Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:12:03.509944 | orchestrator | Tuesday 24 March 2026 05:11:44 +0000 (0:00:01.191) 0:22:25.845 ********* 2026-03-24 05:12:03.509964 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:12:03.509985 | orchestrator | 2026-03-24 05:12:03.510005 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:12:03.510081 | orchestrator | Tuesday 24 March 2026 05:11:46 +0000 (0:00:01.254) 0:22:27.100 ********* 2026-03-24 05:12:03.510096 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:03.510109 | orchestrator | 2026-03-24 05:12:03.510121 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:12:03.510133 | orchestrator | Tuesday 24 March 2026 05:11:47 +0000 (0:00:01.522) 0:22:28.623 ********* 2026-03-24 05:12:03.510144 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:12:03.510155 | orchestrator | 2026-03-24 05:12:03.510166 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:12:03.510177 | orchestrator | Tuesday 24 March 2026 05:11:48 +0000 (0:00:01.113) 0:22:29.736 ********* 2026-03-24 05:12:03.510212 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:12:03.510224 | orchestrator | 2026-03-24 05:12:03.510235 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:12:03.510271 | orchestrator | Tuesday 24 March 2026 05:11:50 +0000 (0:00:02.047) 0:22:31.783 ********* 2026-03-24 05:12:03.510286 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:12:03.510299 | orchestrator | 2026-03-24 05:12:03.510312 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:12:03.510324 | orchestrator | Tuesday 24 March 2026 05:11:52 +0000 (0:00:01.119) 0:22:32.903 ********* 2026-03-24 05:12:03.510337 | orchestrator | skipping: 
[testbed-node-0] 2026-03-24 05:12:03.510350 | orchestrator | 2026-03-24 05:12:03.510362 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:12:03.510375 | orchestrator | Tuesday 24 March 2026 05:11:53 +0000 (0:00:01.106) 0:22:34.009 ********* 2026-03-24 05:12:03.510388 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:03.510400 | orchestrator | 2026-03-24 05:12:03.510412 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:12:03.510425 | orchestrator | Tuesday 24 March 2026 05:11:54 +0000 (0:00:01.191) 0:22:35.201 ********* 2026-03-24 05:12:03.510438 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:03.510451 | orchestrator | 2026-03-24 05:12:03.510464 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:12:03.510477 | orchestrator | Tuesday 24 March 2026 05:11:55 +0000 (0:00:01.162) 0:22:36.364 ********* 2026-03-24 05:12:03.510489 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:03.510501 | orchestrator | 2026-03-24 05:12:03.510514 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:12:03.510527 | orchestrator | Tuesday 24 March 2026 05:11:56 +0000 (0:00:01.142) 0:22:37.507 ********* 2026-03-24 05:12:03.510539 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:03.510552 | orchestrator | 2026-03-24 05:12:03.510563 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:12:03.510574 | orchestrator | Tuesday 24 March 2026 05:11:57 +0000 (0:00:01.114) 0:22:38.622 ********* 2026-03-24 05:12:03.510585 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:03.510595 | orchestrator | 2026-03-24 05:12:03.510606 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:12:03.510617 | 
orchestrator | Tuesday 24 March 2026 05:11:58 +0000 (0:00:01.116) 0:22:39.739 ********* 2026-03-24 05:12:03.510647 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:03.510675 | orchestrator | 2026-03-24 05:12:03.510694 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:12:03.510711 | orchestrator | Tuesday 24 March 2026 05:12:00 +0000 (0:00:01.165) 0:22:40.904 ********* 2026-03-24 05:12:03.510738 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:03.510757 | orchestrator | 2026-03-24 05:12:03.510773 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:12:03.510792 | orchestrator | Tuesday 24 March 2026 05:12:01 +0000 (0:00:01.129) 0:22:42.034 ********* 2026-03-24 05:12:03.510810 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:03.510828 | orchestrator | 2026-03-24 05:12:03.510845 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:12:03.510864 | orchestrator | Tuesday 24 March 2026 05:12:02 +0000 (0:00:01.121) 0:22:43.156 ********* 2026-03-24 05:12:03.510902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:12:03.510919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:12:03.510943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:12:03.510956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:12:03.510970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:12:03.510981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:12:03.510992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:12:03.511024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2db98c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:12:04.740076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:12:04.740261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:12:04.740293 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:04.740317 | orchestrator | 2026-03-24 05:12:04.740338 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:12:04.740360 | orchestrator | Tuesday 24 March 2026 05:12:03 +0000 (0:00:01.234) 0:22:44.390 ********* 2026-03-24 05:12:04.740382 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:04.740408 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:04.740448 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:04.740472 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:04.740547 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:04.740571 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:04.740593 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:04.740629 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2db98c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:04.740676 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:44.008447 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:12:44.008602 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:44.008626 | orchestrator | 2026-03-24 05:12:44.008640 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:12:44.008703 | orchestrator | Tuesday 24 March 2026 05:12:04 +0000 (0:00:01.236) 0:22:45.627 ********* 2026-03-24 05:12:44.008726 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:12:44.008746 | orchestrator | 2026-03-24 05:12:44.008764 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:12:44.008782 | orchestrator 
| Tuesday 24 March 2026 05:12:06 +0000 (0:00:01.517) 0:22:47.145 ********* 2026-03-24 05:12:44.008799 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:12:44.008815 | orchestrator | 2026-03-24 05:12:44.008831 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:12:44.008850 | orchestrator | Tuesday 24 March 2026 05:12:07 +0000 (0:00:01.129) 0:22:48.274 ********* 2026-03-24 05:12:44.008867 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:12:44.008885 | orchestrator | 2026-03-24 05:12:44.008903 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:12:44.008923 | orchestrator | Tuesday 24 March 2026 05:12:08 +0000 (0:00:01.465) 0:22:49.740 ********* 2026-03-24 05:12:44.008943 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:44.008963 | orchestrator | 2026-03-24 05:12:44.008982 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:12:44.009000 | orchestrator | Tuesday 24 March 2026 05:12:09 +0000 (0:00:01.117) 0:22:50.857 ********* 2026-03-24 05:12:44.009013 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:44.009027 | orchestrator | 2026-03-24 05:12:44.009040 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:12:44.009054 | orchestrator | Tuesday 24 March 2026 05:12:11 +0000 (0:00:01.259) 0:22:52.117 ********* 2026-03-24 05:12:44.009067 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:44.009080 | orchestrator | 2026-03-24 05:12:44.009094 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:12:44.009106 | orchestrator | Tuesday 24 March 2026 05:12:12 +0000 (0:00:01.111) 0:22:53.229 ********* 2026-03-24 05:12:44.009166 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 05:12:44.009180 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-03-24 05:12:44.009193 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-24 05:12:44.009206 | orchestrator | 2026-03-24 05:12:44.009218 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:12:44.009231 | orchestrator | Tuesday 24 March 2026 05:12:14 +0000 (0:00:01.906) 0:22:55.135 ********* 2026-03-24 05:12:44.009244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 05:12:44.009382 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 05:12:44.009409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 05:12:44.009427 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:44.009446 | orchestrator | 2026-03-24 05:12:44.009464 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:12:44.009484 | orchestrator | Tuesday 24 March 2026 05:12:15 +0000 (0:00:01.153) 0:22:56.289 ********* 2026-03-24 05:12:44.009504 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:44.009523 | orchestrator | 2026-03-24 05:12:44.009542 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 05:12:44.009554 | orchestrator | Tuesday 24 March 2026 05:12:16 +0000 (0:00:01.105) 0:22:57.394 ********* 2026-03-24 05:12:44.009564 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 05:12:44.009582 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:12:44.009599 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:12:44.009616 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:12:44.009634 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-24 05:12:44.009651 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:12:44.009670 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:12:44.009689 | orchestrator | 2026-03-24 05:12:44.009708 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 05:12:44.009721 | orchestrator | Tuesday 24 March 2026 05:12:18 +0000 (0:00:01.819) 0:22:59.214 ********* 2026-03-24 05:12:44.009732 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 05:12:44.009742 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:12:44.009759 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:12:44.009777 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:12:44.009824 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:12:44.009842 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:12:44.009860 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:12:44.009879 | orchestrator | 2026-03-24 05:12:44.009897 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 05:12:44.009916 | orchestrator | Tuesday 24 March 2026 05:12:20 +0000 (0:00:02.564) 0:23:01.779 ********* 2026-03-24 05:12:44.009936 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-03-24 05:12:44.009954 | orchestrator | 2026-03-24 05:12:44.009972 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-24 05:12:44.009988 
| orchestrator | Tuesday 24 March 2026 05:12:22 +0000 (0:00:01.131) 0:23:02.910 ********* 2026-03-24 05:12:44.010006 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-03-24 05:12:44.010132 | orchestrator | 2026-03-24 05:12:44.010148 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-24 05:12:44.010159 | orchestrator | Tuesday 24 March 2026 05:12:23 +0000 (0:00:01.146) 0:23:04.056 ********* 2026-03-24 05:12:44.010170 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:12:44.010181 | orchestrator | 2026-03-24 05:12:44.010191 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-24 05:12:44.010202 | orchestrator | Tuesday 24 March 2026 05:12:24 +0000 (0:00:01.533) 0:23:05.590 ********* 2026-03-24 05:12:44.010213 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:44.010224 | orchestrator | 2026-03-24 05:12:44.010234 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-24 05:12:44.010245 | orchestrator | Tuesday 24 March 2026 05:12:25 +0000 (0:00:01.164) 0:23:06.755 ********* 2026-03-24 05:12:44.010283 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:44.010300 | orchestrator | 2026-03-24 05:12:44.010312 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-24 05:12:44.010322 | orchestrator | Tuesday 24 March 2026 05:12:26 +0000 (0:00:01.121) 0:23:07.877 ********* 2026-03-24 05:12:44.010333 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:12:44.010344 | orchestrator | 2026-03-24 05:12:44.010355 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-24 05:12:44.010365 | orchestrator | Tuesday 24 March 2026 05:12:28 +0000 (0:00:01.120) 0:23:08.998 ********* 2026-03-24 05:12:44.010376 | orchestrator | ok: [testbed-node-0] 
2026-03-24 05:12:44.010387 | orchestrator |
2026-03-24 05:12:44.010398 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 05:12:44.010409 | orchestrator | Tuesday 24 March 2026 05:12:29 +0000 (0:00:01.535) 0:23:10.534 *********
2026-03-24 05:12:44.010419 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:12:44.010430 | orchestrator |
2026-03-24 05:12:44.010441 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 05:12:44.010461 | orchestrator | Tuesday 24 March 2026 05:12:30 +0000 (0:00:01.107) 0:23:11.642 *********
2026-03-24 05:12:44.010472 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:12:44.010483 | orchestrator |
2026-03-24 05:12:44.010494 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 05:12:44.010505 | orchestrator | Tuesday 24 March 2026 05:12:31 +0000 (0:00:01.128) 0:23:12.770 *********
2026-03-24 05:12:44.010515 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:12:44.010526 | orchestrator |
2026-03-24 05:12:44.010537 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 05:12:44.010547 | orchestrator | Tuesday 24 March 2026 05:12:33 +0000 (0:00:01.602) 0:23:14.373 *********
2026-03-24 05:12:44.010558 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:12:44.010569 | orchestrator |
2026-03-24 05:12:44.010580 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 05:12:44.010590 | orchestrator | Tuesday 24 March 2026 05:12:34 +0000 (0:00:01.520) 0:23:15.894 *********
2026-03-24 05:12:44.010601 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:12:44.010611 | orchestrator |
2026-03-24 05:12:44.010622 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:12:44.010633 | orchestrator | Tuesday 24 March 2026 05:12:36 +0000 (0:00:01.136) 0:23:17.030 *********
2026-03-24 05:12:44.010644 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:12:44.010654 | orchestrator |
2026-03-24 05:12:44.010665 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:12:44.010676 | orchestrator | Tuesday 24 March 2026 05:12:37 +0000 (0:00:01.135) 0:23:18.166 *********
2026-03-24 05:12:44.010686 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:12:44.010697 | orchestrator |
2026-03-24 05:12:44.010708 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:12:44.010718 | orchestrator | Tuesday 24 March 2026 05:12:38 +0000 (0:00:01.131) 0:23:19.297 *********
2026-03-24 05:12:44.010729 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:12:44.010748 | orchestrator |
2026-03-24 05:12:44.010760 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:12:44.010770 | orchestrator | Tuesday 24 March 2026 05:12:39 +0000 (0:00:01.111) 0:23:20.409 *********
2026-03-24 05:12:44.010781 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:12:44.010792 | orchestrator |
2026-03-24 05:12:44.010802 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:12:44.010813 | orchestrator | Tuesday 24 March 2026 05:12:40 +0000 (0:00:01.119) 0:23:21.528 *********
2026-03-24 05:12:44.010824 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:12:44.010835 | orchestrator |
2026-03-24 05:12:44.010845 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:12:44.010856 | orchestrator | Tuesday 24 March 2026 05:12:41 +0000 (0:00:01.123) 0:23:22.651 *********
2026-03-24 05:12:44.010867 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:12:44.010878 | orchestrator |
2026-03-24 05:12:44.010888 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:12:44.010899 | orchestrator | Tuesday 24 March 2026 05:12:42 +0000 (0:00:01.103) 0:23:23.754 *********
2026-03-24 05:12:44.010922 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:13:32.357907 | orchestrator |
2026-03-24 05:13:32.357996 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:13:32.358006 | orchestrator | Tuesday 24 March 2026 05:12:43 +0000 (0:00:01.144) 0:23:24.899 *********
2026-03-24 05:13:32.358070 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:13:32.358078 | orchestrator |
2026-03-24 05:13:32.358084 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:13:32.358090 | orchestrator | Tuesday 24 March 2026 05:12:45 +0000 (0:00:01.203) 0:23:26.103 *********
2026-03-24 05:13:32.358096 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:13:32.358102 | orchestrator |
2026-03-24 05:13:32.358108 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:13:32.358114 | orchestrator | Tuesday 24 March 2026 05:12:46 +0000 (0:00:01.151) 0:23:27.255 *********
2026-03-24 05:13:32.358119 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358126 | orchestrator |
2026-03-24 05:13:32.358131 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:13:32.358137 | orchestrator | Tuesday 24 March 2026 05:12:47 +0000 (0:00:01.105) 0:23:28.361 *********
2026-03-24 05:13:32.358142 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358148 | orchestrator |
2026-03-24 05:13:32.358153 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 05:13:32.358158 | orchestrator | Tuesday 24 March 2026 05:12:48 +0000 (0:00:01.108) 0:23:29.470 *********
2026-03-24 05:13:32.358164 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358169 | orchestrator |
2026-03-24 05:13:32.358174 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 05:13:32.358180 | orchestrator | Tuesday 24 March 2026 05:12:49 +0000 (0:00:01.142) 0:23:30.612 *********
2026-03-24 05:13:32.358185 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358190 | orchestrator |
2026-03-24 05:13:32.358196 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 05:13:32.358201 | orchestrator | Tuesday 24 March 2026 05:12:50 +0000 (0:00:01.176) 0:23:31.788 *********
2026-03-24 05:13:32.358206 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358212 | orchestrator |
2026-03-24 05:13:32.358217 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 05:13:32.358223 | orchestrator | Tuesday 24 March 2026 05:12:52 +0000 (0:00:01.136) 0:23:32.925 *********
2026-03-24 05:13:32.358228 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358233 | orchestrator |
2026-03-24 05:13:32.358239 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 05:13:32.358244 | orchestrator | Tuesday 24 March 2026 05:12:53 +0000 (0:00:01.111) 0:23:34.036 *********
2026-03-24 05:13:32.358249 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358273 | orchestrator |
2026-03-24 05:13:32.358279 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 05:13:32.358285 | orchestrator | Tuesday 24 March 2026 05:12:54 +0000 (0:00:01.092) 0:23:35.128 *********
2026-03-24 05:13:32.358291 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358296 | orchestrator |
2026-03-24 05:13:32.358312 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 05:13:32.358318 | orchestrator | Tuesday 24 March 2026 05:12:55 +0000 (0:00:01.116) 0:23:36.245 *********
2026-03-24 05:13:32.358324 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358330 | orchestrator |
2026-03-24 05:13:32.358336 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 05:13:32.358395 | orchestrator | Tuesday 24 March 2026 05:12:56 +0000 (0:00:01.101) 0:23:37.346 *********
2026-03-24 05:13:32.358401 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358406 | orchestrator |
2026-03-24 05:13:32.358412 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 05:13:32.358417 | orchestrator | Tuesday 24 March 2026 05:12:57 +0000 (0:00:01.126) 0:23:38.473 *********
2026-03-24 05:13:32.358422 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358436 | orchestrator |
2026-03-24 05:13:32.358442 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 05:13:32.358447 | orchestrator | Tuesday 24 March 2026 05:12:58 +0000 (0:00:01.115) 0:23:39.588 *********
2026-03-24 05:13:32.358452 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358463 | orchestrator |
2026-03-24 05:13:32.358470 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 05:13:32.358476 | orchestrator | Tuesday 24 March 2026 05:12:59 +0000 (0:00:01.120) 0:23:40.709 *********
2026-03-24 05:13:32.358482 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:13:32.358488 | orchestrator |
2026-03-24 05:13:32.358495 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 05:13:32.358501 | orchestrator | Tuesday 24 March 2026 05:13:01 +0000 (0:00:02.015) 0:23:42.725 *********
2026-03-24 05:13:32.358507 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:13:32.358513 | orchestrator |
2026-03-24 05:13:32.358519 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 05:13:32.358525 | orchestrator | Tuesday 24 March 2026 05:13:04 +0000 (0:00:02.470) 0:23:45.195 *********
2026-03-24 05:13:32.358531 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-24 05:13:32.358538 | orchestrator |
2026-03-24 05:13:32.358544 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 05:13:32.358550 | orchestrator | Tuesday 24 March 2026 05:13:05 +0000 (0:00:01.116) 0:23:46.312 *********
2026-03-24 05:13:32.358556 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358562 | orchestrator |
2026-03-24 05:13:32.358569 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 05:13:32.358575 | orchestrator | Tuesday 24 March 2026 05:13:06 +0000 (0:00:01.110) 0:23:47.423 *********
2026-03-24 05:13:32.358581 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358587 | orchestrator |
2026-03-24 05:13:32.358593 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 05:13:32.358599 | orchestrator | Tuesday 24 March 2026 05:13:07 +0000 (0:00:01.097) 0:23:48.521 *********
2026-03-24 05:13:32.358618 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 05:13:32.358625 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 05:13:32.358631 | orchestrator |
2026-03-24 05:13:32.358638 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 05:13:32.358644 | orchestrator | Tuesday 24 March 2026 05:13:09 +0000 (0:00:01.889) 0:23:50.410 *********
2026-03-24 05:13:32.358650 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:13:32.358655 | orchestrator |
2026-03-24 05:13:32.358661 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-24 05:13:32.358677 | orchestrator | Tuesday 24 March 2026 05:13:10 +0000 (0:00:01.457) 0:23:51.867 *********
2026-03-24 05:13:32.358683 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358689 | orchestrator |
2026-03-24 05:13:32.358695 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-24 05:13:32.358701 | orchestrator | Tuesday 24 March 2026 05:13:12 +0000 (0:00:01.115) 0:23:52.983 *********
2026-03-24 05:13:32.358707 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358713 | orchestrator |
2026-03-24 05:13:32.358719 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 05:13:32.358725 | orchestrator | Tuesday 24 March 2026 05:13:13 +0000 (0:00:01.107) 0:23:54.091 *********
2026-03-24 05:13:32.358731 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358737 | orchestrator |
2026-03-24 05:13:32.358743 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 05:13:32.358749 | orchestrator | Tuesday 24 March 2026 05:13:14 +0000 (0:00:01.099) 0:23:55.190 *********
2026-03-24 05:13:32.358755 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-24 05:13:32.358761 | orchestrator |
2026-03-24 05:13:32.358767 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-24 05:13:32.358772 | orchestrator | Tuesday 24 March 2026 05:13:15 +0000 (0:00:01.115) 0:23:56.306 *********
2026-03-24 05:13:32.358778 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:13:32.358783 | orchestrator |
2026-03-24 05:13:32.358788 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-24 05:13:32.358794 | orchestrator | Tuesday 24 March 2026 05:13:17 +0000 (0:00:01.694) 0:23:58.000 *********
2026-03-24 05:13:32.358799 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-24 05:13:32.358804 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-24 05:13:32.358809 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-24 05:13:32.358815 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358820 | orchestrator |
2026-03-24 05:13:32.358826 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-24 05:13:32.358831 | orchestrator | Tuesday 24 March 2026 05:13:18 +0000 (0:00:01.115) 0:23:59.116 *********
2026-03-24 05:13:32.358836 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358842 | orchestrator |
2026-03-24 05:13:32.358851 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-24 05:13:32.358857 | orchestrator | Tuesday 24 March 2026 05:13:19 +0000 (0:00:01.141) 0:24:00.257 *********
2026-03-24 05:13:32.358862 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358868 | orchestrator |
2026-03-24 05:13:32.358873 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-24 05:13:32.358878 | orchestrator | Tuesday 24 March 2026 05:13:20 +0000 (0:00:01.166) 0:24:01.424 *********
2026-03-24 05:13:32.358884 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358889 | orchestrator |
2026-03-24 05:13:32.358894 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-24 05:13:32.358900 | orchestrator | Tuesday 24 March 2026 05:13:21 +0000 (0:00:01.174) 0:24:02.598 *********
2026-03-24 05:13:32.358905 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358910 | orchestrator |
2026-03-24 05:13:32.358916 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-24 05:13:32.358921 | orchestrator | Tuesday 24 March 2026 05:13:22 +0000 (0:00:01.120) 0:24:03.719 *********
2026-03-24 05:13:32.358926 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.358932 | orchestrator |
2026-03-24 05:13:32.358937 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 05:13:32.358942 | orchestrator | Tuesday 24 March 2026 05:13:23 +0000 (0:00:01.117) 0:24:04.836 *********
2026-03-24 05:13:32.358948 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:13:32.358958 | orchestrator |
2026-03-24 05:13:32.358963 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 05:13:32.358969 | orchestrator | Tuesday 24 March 2026 05:13:26 +0000 (0:00:02.648) 0:24:07.485 *********
2026-03-24 05:13:32.358977 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:13:32.358986 | orchestrator |
2026-03-24 05:13:32.358994 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 05:13:32.358999 | orchestrator | Tuesday 24 March 2026 05:13:27 +0000 (0:00:01.104) 0:24:08.590 *********
2026-03-24 05:13:32.359007 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-24 05:13:32.359015 | orchestrator |
2026-03-24 05:13:32.359020 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-24 05:13:32.359028 | orchestrator | Tuesday 24 March 2026 05:13:28 +0000 (0:00:01.195) 0:24:09.786 *********
2026-03-24 05:13:32.359036 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.359042 | orchestrator |
2026-03-24 05:13:32.359052 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-24 05:13:32.359058 | orchestrator | Tuesday 24 March 2026 05:13:30 +0000 (0:00:01.202) 0:24:10.989 *********
2026-03-24 05:13:32.359063 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.359068 | orchestrator |
2026-03-24 05:13:32.359073 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-24 05:13:32.359079 | orchestrator | Tuesday 24 March 2026 05:13:31 +0000 (0:00:01.135) 0:24:12.124 *********
2026-03-24 05:13:32.359084 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:13:32.359090 | orchestrator |
2026-03-24 05:13:32.359099 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-24 05:14:15.133936 | orchestrator | Tuesday 24 March 2026 05:13:32 +0000 (0:00:01.119) 0:24:13.244 *********
2026-03-24 05:14:15.134105 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134126 | orchestrator |
2026-03-24 05:14:15.134169 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-24 05:14:15.134183 | orchestrator | Tuesday 24 March 2026 05:13:33 +0000 (0:00:01.125) 0:24:14.370 *********
2026-03-24 05:14:15.134194 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134206 | orchestrator |
2026-03-24 05:14:15.134218 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-24 05:14:15.134230 | orchestrator | Tuesday 24 March 2026 05:13:34 +0000 (0:00:01.112) 0:24:15.482 *********
2026-03-24 05:14:15.134241 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134253 | orchestrator |
2026-03-24 05:14:15.134265 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-24 05:14:15.134276 | orchestrator | Tuesday 24 March 2026 05:13:35 +0000 (0:00:01.168) 0:24:16.650 *********
2026-03-24 05:14:15.134288 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134300 | orchestrator |
2026-03-24 05:14:15.134311 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-24 05:14:15.134323 | orchestrator | Tuesday 24 March 2026 05:13:36 +0000 (0:00:01.114) 0:24:17.765 *********
2026-03-24 05:14:15.134334 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134345 | orchestrator |
2026-03-24 05:14:15.134356 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-24 05:14:15.134368 | orchestrator | Tuesday 24 March 2026 05:13:37 +0000 (0:00:01.125) 0:24:18.890 *********
2026-03-24 05:14:15.134379 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:14:15.134391 | orchestrator |
2026-03-24 05:14:15.134402 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 05:14:15.134438 | orchestrator | Tuesday 24 March 2026 05:13:39 +0000 (0:00:01.161) 0:24:20.052 *********
2026-03-24 05:14:15.134451 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-24 05:14:15.134463 | orchestrator |
2026-03-24 05:14:15.134475 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-24 05:14:15.134516 | orchestrator | Tuesday 24 March 2026 05:13:40 +0000 (0:00:01.127) 0:24:21.180 *********
2026-03-24 05:14:15.134525 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-24 05:14:15.134533 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-24 05:14:15.134541 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-24 05:14:15.134549 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-24 05:14:15.134556 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-24 05:14:15.134564 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-24 05:14:15.134571 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-24 05:14:15.134579 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-24 05:14:15.134587 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-24 05:14:15.134595 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-24 05:14:15.134603 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-24 05:14:15.134610 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-24 05:14:15.134618 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-24 05:14:15.134626 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-24 05:14:15.134634 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-24 05:14:15.134641 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-24 05:14:15.134648 | orchestrator |
2026-03-24 05:14:15.134655 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-24 05:14:15.134661 | orchestrator | Tuesday 24 March 2026 05:13:47 +0000 (0:00:06.860) 0:24:28.040 *********
2026-03-24 05:14:15.134668 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134675 | orchestrator |
2026-03-24 05:14:15.134682 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-24 05:14:15.134688 | orchestrator | Tuesday 24 March 2026 05:13:48 +0000 (0:00:01.125) 0:24:29.165 *********
2026-03-24 05:14:15.134695 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134702 | orchestrator |
2026-03-24 05:14:15.134708 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-24 05:14:15.134715 | orchestrator | Tuesday 24 March 2026 05:13:49 +0000 (0:00:01.143) 0:24:30.309 *********
2026-03-24 05:14:15.134721 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134728 | orchestrator |
2026-03-24 05:14:15.134735 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-24 05:14:15.134741 | orchestrator | Tuesday 24 March 2026 05:13:50 +0000 (0:00:01.139) 0:24:31.448 *********
2026-03-24 05:14:15.134748 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134754 | orchestrator |
2026-03-24 05:14:15.134796 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-24 05:14:15.134803 | orchestrator | Tuesday 24 March 2026 05:13:51 +0000 (0:00:01.091) 0:24:32.539 *********
2026-03-24 05:14:15.134810 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134816 | orchestrator |
2026-03-24 05:14:15.134823 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-24 05:14:15.134830 | orchestrator | Tuesday 24 March 2026 05:13:52 +0000 (0:00:01.132) 0:24:33.671 *********
2026-03-24 05:14:15.134836 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134843 | orchestrator |
2026-03-24 05:14:15.134849 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-24 05:14:15.134856 | orchestrator | Tuesday 24 March 2026 05:13:53 +0000 (0:00:01.131) 0:24:34.803 *********
2026-03-24 05:14:15.134863 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134869 | orchestrator |
2026-03-24 05:14:15.134897 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-24 05:14:15.134909 | orchestrator | Tuesday 24 March 2026 05:13:55 +0000 (0:00:01.115) 0:24:35.918 *********
2026-03-24 05:14:15.134929 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134941 | orchestrator |
2026-03-24 05:14:15.134953 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-24 05:14:15.134965 | orchestrator | Tuesday 24 March 2026 05:13:56 +0000 (0:00:01.107) 0:24:37.026 *********
2026-03-24 05:14:15.134976 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.134987 | orchestrator |
2026-03-24 05:14:15.134999 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-24 05:14:15.135010 | orchestrator | Tuesday 24 March 2026 05:13:57 +0000 (0:00:01.100) 0:24:38.126 *********
2026-03-24 05:14:15.135022 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135033 | orchestrator |
2026-03-24 05:14:15.135045 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-24 05:14:15.135056 | orchestrator | Tuesday 24 March 2026 05:13:58 +0000 (0:00:01.100) 0:24:39.226 *********
2026-03-24 05:14:15.135068 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135079 | orchestrator |
2026-03-24 05:14:15.135089 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-24 05:14:15.135101 | orchestrator | Tuesday 24 March 2026 05:13:59 +0000 (0:00:01.113) 0:24:40.341 *********
2026-03-24 05:14:15.135112 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135123 | orchestrator |
2026-03-24 05:14:15.135134 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-24 05:14:15.135146 | orchestrator | Tuesday 24 March 2026 05:14:00 +0000 (0:00:01.144) 0:24:41.485 *********
2026-03-24 05:14:15.135157 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135169 | orchestrator |
2026-03-24 05:14:15.135180 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-24 05:14:15.135191 | orchestrator | Tuesday 24 March 2026 05:14:01 +0000 (0:00:01.186) 0:24:42.672 *********
2026-03-24 05:14:15.135202 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135215 | orchestrator |
2026-03-24 05:14:15.135227 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-24 05:14:15.135238 | orchestrator | Tuesday 24 March 2026 05:14:02 +0000 (0:00:01.120) 0:24:43.793 *********
2026-03-24 05:14:15.135250 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135261 | orchestrator |
2026-03-24 05:14:15.135272 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-24 05:14:15.135283 | orchestrator | Tuesday 24 March 2026 05:14:04 +0000 (0:00:01.242) 0:24:45.035 *********
2026-03-24 05:14:15.135294 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135306 | orchestrator |
2026-03-24 05:14:15.135317 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-24 05:14:15.135328 | orchestrator | Tuesday 24 March 2026 05:14:05 +0000 (0:00:01.166) 0:24:46.202 *********
2026-03-24 05:14:15.135344 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135354 | orchestrator |
2026-03-24 05:14:15.135367 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:14:15.135380 | orchestrator | Tuesday 24 March 2026 05:14:06 +0000 (0:00:01.119) 0:24:47.322 *********
2026-03-24 05:14:15.135391 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135402 | orchestrator |
2026-03-24 05:14:15.135435 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:14:15.135446 | orchestrator | Tuesday 24 March 2026 05:14:07 +0000 (0:00:01.121) 0:24:48.443 *********
2026-03-24 05:14:15.135457 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135469 | orchestrator |
2026-03-24 05:14:15.135481 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:14:15.135492 | orchestrator | Tuesday 24 March 2026 05:14:08 +0000 (0:00:01.137) 0:24:49.581 *********
2026-03-24 05:14:15.135503 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135514 | orchestrator |
2026-03-24 05:14:15.135525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:14:15.135545 | orchestrator | Tuesday 24 March 2026 05:14:09 +0000 (0:00:01.150) 0:24:50.731 *********
2026-03-24 05:14:15.135556 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135563 | orchestrator |
2026-03-24 05:14:15.135569 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:14:15.135576 | orchestrator | Tuesday 24 March 2026 05:14:10 +0000 (0:00:01.132) 0:24:51.864 *********
2026-03-24 05:14:15.135583 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-24 05:14:15.135589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-24 05:14:15.135596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-24 05:14:15.135603 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135609 | orchestrator |
2026-03-24 05:14:15.135616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:14:15.135622 | orchestrator | Tuesday 24 March 2026 05:14:12 +0000 (0:00:01.385) 0:24:53.249 *********
2026-03-24 05:14:15.135629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-24 05:14:15.135635 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-24 05:14:15.135642 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-24 05:14:15.135652 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135659 | orchestrator |
2026-03-24 05:14:15.135666 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:14:15.135672 | orchestrator | Tuesday 24 March 2026 05:14:13 +0000 (0:00:01.359) 0:24:54.609 *********
2026-03-24 05:14:15.135679 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-24 05:14:15.135685 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-24 05:14:15.135692 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-24 05:14:15.135698 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:14:15.135705 | orchestrator |
2026-03-24 05:14:15.135718 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:15:18.760837 | orchestrator | Tuesday 24 March 2026 05:14:15 +0000 (0:00:01.409) 0:24:56.019 *********
2026-03-24 05:15:18.760927 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:15:18.760938 | orchestrator |
2026-03-24 05:15:18.760946 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:15:18.760953 | orchestrator | Tuesday 24 March 2026 05:14:16 +0000 (0:00:01.161) 0:24:57.181 *********
2026-03-24 05:15:18.760960 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-24 05:15:18.760966 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:15:18.760973 | orchestrator |
2026-03-24 05:15:18.760979 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-24 05:15:18.760985 | orchestrator | Tuesday 24 March 2026 05:14:17 +0000 (0:00:01.279) 0:24:58.461 *********
2026-03-24 05:15:18.760992 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:15:18.760998 | orchestrator |
2026-03-24 05:15:18.761005 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-24 05:15:18.761011 | orchestrator | Tuesday 24 March 2026 05:14:19 +0000 (0:00:01.812) 0:25:00.274 *********
2026-03-24 05:15:18.761018 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-24 05:15:18.761024 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:15:18.761031 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:15:18.761037 | orchestrator |
2026-03-24 05:15:18.761044 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-24 05:15:18.761050 | orchestrator | Tuesday 24 March 2026 05:14:21 +0000 (0:00:01.655) 0:25:01.929 *********
2026-03-24 05:15:18.761056 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0
2026-03-24 05:15:18.761062 | orchestrator |
2026-03-24 05:15:18.761069 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-24 05:15:18.761093 | orchestrator | Tuesday 24 March 2026 05:14:22 +0000 (0:00:01.471) 0:25:03.400 *********
2026-03-24 05:15:18.761100 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:15:18.761106 | orchestrator |
2026-03-24 05:15:18.761112 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-24 05:15:18.761118 | orchestrator | Tuesday 24 March 2026 05:14:24 +0000 (0:00:01.525) 0:25:04.926 *********
2026-03-24 05:15:18.761125 | orchestrator | skipping: [testbed-node-0]
2026-03-24 05:15:18.761131 | orchestrator |
2026-03-24 05:15:18.761137 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-24 05:15:18.761143 | orchestrator | Tuesday 24 March 2026 05:14:25 +0000 (0:00:01.114) 0:25:06.041 *********
2026-03-24 05:15:18.761149 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-24 05:15:18.761156 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-24 05:15:18.761162 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-24 05:15:18.761181 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-24 05:15:18.761187 | orchestrator |
2026-03-24 05:15:18.761193 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-24 05:15:18.761200 | orchestrator | Tuesday 24 March 2026 05:14:32 +0000 (0:00:07.634) 0:25:13.676 *********
2026-03-24 05:15:18.761206 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:15:18.761212 | orchestrator |
2026-03-24 05:15:18.761218 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-24 05:15:18.761224 | orchestrator | Tuesday 24 March 2026 05:14:33 +0000 (0:00:01.218) 0:25:14.894 *********
2026-03-24 05:15:18.761230 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-24 05:15:18.761236 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-24 05:15:18.761243 | orchestrator |
2026-03-24 05:15:18.761249 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-24 05:15:18.761255 | orchestrator | Tuesday 24 March 2026 05:14:37 +0000 (0:00:03.262) 0:25:18.156 *********
2026-03-24 05:15:18.761261 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-24 05:15:18.761267 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-24 05:15:18.761274 | orchestrator |
2026-03-24 05:15:18.761280 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-24 05:15:18.761286 | orchestrator | Tuesday 24 March 2026 05:14:39 +0000 (0:00:01.526) 0:25:20.135 *********
2026-03-24 05:15:18.761292 | orchestrator | ok: [testbed-node-0]
2026-03-24 05:15:18.761298 | orchestrator |
2026-03-24 05:15:18.761304 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-24 05:15:18.761311 | orchestrator | Tuesday 24 March 2026 05:14:40 +0000 (0:00:01.526)
0:25:21.662 ********* 2026-03-24 05:15:18.761317 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:15:18.761323 | orchestrator | 2026-03-24 05:15:18.761329 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-24 05:15:18.761335 | orchestrator | Tuesday 24 March 2026 05:14:41 +0000 (0:00:01.078) 0:25:22.741 ********* 2026-03-24 05:15:18.761341 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:15:18.761347 | orchestrator | 2026-03-24 05:15:18.761353 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-24 05:15:18.761360 | orchestrator | Tuesday 24 March 2026 05:14:42 +0000 (0:00:01.101) 0:25:23.842 ********* 2026-03-24 05:15:18.761366 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-03-24 05:15:18.761372 | orchestrator | 2026-03-24 05:15:18.761379 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-24 05:15:18.761385 | orchestrator | Tuesday 24 March 2026 05:14:44 +0000 (0:00:01.425) 0:25:25.268 ********* 2026-03-24 05:15:18.761391 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:15:18.761398 | orchestrator | 2026-03-24 05:15:18.761405 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-24 05:15:18.761412 | orchestrator | Tuesday 24 March 2026 05:14:45 +0000 (0:00:01.146) 0:25:26.414 ********* 2026-03-24 05:15:18.761425 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:15:18.761432 | orchestrator | 2026-03-24 05:15:18.761439 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-24 05:15:18.761457 | orchestrator | Tuesday 24 March 2026 05:14:46 +0000 (0:00:01.124) 0:25:27.539 ********* 2026-03-24 05:15:18.761469 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-03-24 05:15:18.761480 | 
orchestrator | 2026-03-24 05:15:18.761492 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-24 05:15:18.761526 | orchestrator | Tuesday 24 March 2026 05:14:48 +0000 (0:00:01.432) 0:25:28.971 ********* 2026-03-24 05:15:18.761537 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:15:18.761547 | orchestrator | 2026-03-24 05:15:18.761558 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-24 05:15:18.761569 | orchestrator | Tuesday 24 March 2026 05:14:50 +0000 (0:00:02.154) 0:25:31.125 ********* 2026-03-24 05:15:18.761579 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:15:18.761591 | orchestrator | 2026-03-24 05:15:18.761599 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-24 05:15:18.761606 | orchestrator | Tuesday 24 March 2026 05:14:52 +0000 (0:00:01.995) 0:25:33.121 ********* 2026-03-24 05:15:18.761613 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:15:18.761620 | orchestrator | 2026-03-24 05:15:18.761628 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-24 05:15:18.761635 | orchestrator | Tuesday 24 March 2026 05:14:54 +0000 (0:00:02.590) 0:25:35.712 ********* 2026-03-24 05:15:18.761642 | orchestrator | changed: [testbed-node-0] 2026-03-24 05:15:18.761649 | orchestrator | 2026-03-24 05:15:18.761656 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-24 05:15:18.761664 | orchestrator | Tuesday 24 March 2026 05:14:58 +0000 (0:00:04.084) 0:25:39.797 ********* 2026-03-24 05:15:18.761671 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:15:18.761677 | orchestrator | 2026-03-24 05:15:18.761683 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-24 05:15:18.761689 | orchestrator | 2026-03-24 05:15:18.761695 | orchestrator | TASK 
[Stop ceph mgr] *********************************************************** 2026-03-24 05:15:18.761701 | orchestrator | Tuesday 24 March 2026 05:15:00 +0000 (0:00:01.232) 0:25:41.029 ********* 2026-03-24 05:15:18.761707 | orchestrator | changed: [testbed-node-1] 2026-03-24 05:15:18.761713 | orchestrator | 2026-03-24 05:15:18.761720 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-24 05:15:18.761726 | orchestrator | Tuesday 24 March 2026 05:15:02 +0000 (0:00:02.778) 0:25:43.808 ********* 2026-03-24 05:15:18.761732 | orchestrator | changed: [testbed-node-1] 2026-03-24 05:15:18.761738 | orchestrator | 2026-03-24 05:15:18.761744 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:15:18.761750 | orchestrator | Tuesday 24 March 2026 05:15:05 +0000 (0:00:02.181) 0:25:45.990 ********* 2026-03-24 05:15:18.761756 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-24 05:15:18.761762 | orchestrator | 2026-03-24 05:15:18.761768 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:15:18.761779 | orchestrator | Tuesday 24 March 2026 05:15:06 +0000 (0:00:01.107) 0:25:47.098 ********* 2026-03-24 05:15:18.761785 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:18.761791 | orchestrator | 2026-03-24 05:15:18.761798 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:15:18.761804 | orchestrator | Tuesday 24 March 2026 05:15:07 +0000 (0:00:01.455) 0:25:48.554 ********* 2026-03-24 05:15:18.761810 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:18.761816 | orchestrator | 2026-03-24 05:15:18.761822 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:15:18.761828 | orchestrator | Tuesday 24 March 2026 05:15:08 +0000 (0:00:01.128) 0:25:49.682 
********* 2026-03-24 05:15:18.761834 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:18.761840 | orchestrator | 2026-03-24 05:15:18.761851 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:15:18.761858 | orchestrator | Tuesday 24 March 2026 05:15:10 +0000 (0:00:01.455) 0:25:51.137 ********* 2026-03-24 05:15:18.761864 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:18.761870 | orchestrator | 2026-03-24 05:15:18.761876 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:15:18.761882 | orchestrator | Tuesday 24 March 2026 05:15:11 +0000 (0:00:01.119) 0:25:52.257 ********* 2026-03-24 05:15:18.761888 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:18.761894 | orchestrator | 2026-03-24 05:15:18.761900 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:15:18.761906 | orchestrator | Tuesday 24 March 2026 05:15:12 +0000 (0:00:01.146) 0:25:53.404 ********* 2026-03-24 05:15:18.761912 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:18.761918 | orchestrator | 2026-03-24 05:15:18.761924 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:15:18.761930 | orchestrator | Tuesday 24 March 2026 05:15:13 +0000 (0:00:01.126) 0:25:54.531 ********* 2026-03-24 05:15:18.761936 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:18.761943 | orchestrator | 2026-03-24 05:15:18.761949 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:15:18.761955 | orchestrator | Tuesday 24 March 2026 05:15:14 +0000 (0:00:01.116) 0:25:55.647 ********* 2026-03-24 05:15:18.761961 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:18.761967 | orchestrator | 2026-03-24 05:15:18.761973 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] 
************ 2026-03-24 05:15:18.761979 | orchestrator | Tuesday 24 March 2026 05:15:15 +0000 (0:00:01.142) 0:25:56.789 ********* 2026-03-24 05:15:18.761985 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:15:18.761991 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 05:15:18.761997 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:15:18.762004 | orchestrator | 2026-03-24 05:15:18.762010 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:15:18.762056 | orchestrator | Tuesday 24 March 2026 05:15:17 +0000 (0:00:01.625) 0:25:58.415 ********* 2026-03-24 05:15:18.762063 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:18.762069 | orchestrator | 2026-03-24 05:15:18.762076 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:15:18.762088 | orchestrator | Tuesday 24 March 2026 05:15:18 +0000 (0:00:01.230) 0:25:59.646 ********* 2026-03-24 05:15:42.447904 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:15:42.448022 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 05:15:42.448042 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:15:42.448059 | orchestrator | 2026-03-24 05:15:42.448075 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:15:42.448091 | orchestrator | Tuesday 24 March 2026 05:15:21 +0000 (0:00:02.914) 0:26:02.561 ********* 2026-03-24 05:15:42.448106 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-24 05:15:42.448122 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-24 05:15:42.448136 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-03-24 05:15:42.448152 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448167 | orchestrator | 2026-03-24 05:15:42.448183 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:15:42.448200 | orchestrator | Tuesday 24 March 2026 05:15:23 +0000 (0:00:01.414) 0:26:03.976 ********* 2026-03-24 05:15:42.448219 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:15:42.448263 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:15:42.448281 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:15:42.448296 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448311 | orchestrator | 2026-03-24 05:15:42.448327 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:15:42.448344 | orchestrator | Tuesday 24 March 2026 05:15:24 +0000 (0:00:01.566) 0:26:05.542 ********* 2026-03-24 05:15:42.448377 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 
'ansible_loop_var': 'item'})  2026-03-24 05:15:42.448390 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:42.448399 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:42.448408 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448418 | orchestrator | 2026-03-24 05:15:42.448428 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:15:42.448438 | orchestrator | Tuesday 24 March 2026 05:15:25 +0000 (0:00:01.171) 0:26:06.714 ********* 2026-03-24 05:15:42.448450 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:15:19.290702', 'end': '2026-03-24 05:15:19.339535', 'delta': '0:00:00.048833', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': 
False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:15:42.448481 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:15:19.849519', 'end': '2026-03-24 05:15:19.893181', 'delta': '0:00:00.043662', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:15:42.448501 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:15:20.446686', 'end': '2026-03-24 05:15:20.503857', 'delta': '0:00:00.057171', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:15:42.448511 | orchestrator | 2026-03-24 05:15:42.448521 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:15:42.448576 | orchestrator | Tuesday 24 March 2026 05:15:26 +0000 (0:00:01.170) 0:26:07.884 ********* 2026-03-24 05:15:42.448587 | 
orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:42.448597 | orchestrator | 2026-03-24 05:15:42.448607 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:15:42.448617 | orchestrator | Tuesday 24 March 2026 05:15:28 +0000 (0:00:01.226) 0:26:09.111 ********* 2026-03-24 05:15:42.448627 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448637 | orchestrator | 2026-03-24 05:15:42.448647 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:15:42.448662 | orchestrator | Tuesday 24 March 2026 05:15:29 +0000 (0:00:01.230) 0:26:10.342 ********* 2026-03-24 05:15:42.448672 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:42.448682 | orchestrator | 2026-03-24 05:15:42.448692 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:15:42.448701 | orchestrator | Tuesday 24 March 2026 05:15:30 +0000 (0:00:01.151) 0:26:11.494 ********* 2026-03-24 05:15:42.448712 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:15:42.448722 | orchestrator | 2026-03-24 05:15:42.448732 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:15:42.448742 | orchestrator | Tuesday 24 March 2026 05:15:32 +0000 (0:00:01.932) 0:26:13.426 ********* 2026-03-24 05:15:42.448752 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:42.448762 | orchestrator | 2026-03-24 05:15:42.448772 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:15:42.448780 | orchestrator | Tuesday 24 March 2026 05:15:33 +0000 (0:00:01.125) 0:26:14.552 ********* 2026-03-24 05:15:42.448789 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448798 | orchestrator | 2026-03-24 05:15:42.448806 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-03-24 05:15:42.448815 | orchestrator | Tuesday 24 March 2026 05:15:34 +0000 (0:00:01.159) 0:26:15.711 ********* 2026-03-24 05:15:42.448823 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448832 | orchestrator | 2026-03-24 05:15:42.448840 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:15:42.448849 | orchestrator | Tuesday 24 March 2026 05:15:35 +0000 (0:00:01.187) 0:26:16.899 ********* 2026-03-24 05:15:42.448858 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448866 | orchestrator | 2026-03-24 05:15:42.448875 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:15:42.448884 | orchestrator | Tuesday 24 March 2026 05:15:37 +0000 (0:00:01.014) 0:26:17.914 ********* 2026-03-24 05:15:42.448892 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448901 | orchestrator | 2026-03-24 05:15:42.448910 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:15:42.448918 | orchestrator | Tuesday 24 March 2026 05:15:38 +0000 (0:00:01.064) 0:26:18.979 ********* 2026-03-24 05:15:42.448927 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448935 | orchestrator | 2026-03-24 05:15:42.448944 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:15:42.448960 | orchestrator | Tuesday 24 March 2026 05:15:39 +0000 (0:00:01.081) 0:26:20.060 ********* 2026-03-24 05:15:42.448969 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.448978 | orchestrator | 2026-03-24 05:15:42.448986 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:15:42.448995 | orchestrator | Tuesday 24 March 2026 05:15:40 +0000 (0:00:01.126) 0:26:21.187 ********* 2026-03-24 05:15:42.449004 | orchestrator | skipping: 
[testbed-node-1] 2026-03-24 05:15:42.449012 | orchestrator | 2026-03-24 05:15:42.449021 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:15:42.449030 | orchestrator | Tuesday 24 March 2026 05:15:41 +0000 (0:00:01.073) 0:26:22.261 ********* 2026-03-24 05:15:42.449038 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:42.449047 | orchestrator | 2026-03-24 05:15:42.449056 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:15:42.449071 | orchestrator | Tuesday 24 March 2026 05:15:42 +0000 (0:00:01.076) 0:26:23.338 ********* 2026-03-24 05:15:45.985499 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:45.985666 | orchestrator | 2026-03-24 05:15:45.985684 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:15:45.985698 | orchestrator | Tuesday 24 March 2026 05:15:43 +0000 (0:00:01.078) 0:26:24.417 ********* 2026-03-24 05:15:45.985714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:15:45.985730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:15:45.985743 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:15:45.985774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:15:45.985790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:15:45.985802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:15:45.985871 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:15:45.985909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6bbbff7c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:15:45.985924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:15:45.985941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:15:45.985953 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:45.985965 | orchestrator | 2026-03-24 05:15:45.985976 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:15:45.985989 | orchestrator | Tuesday 24 March 2026 05:15:44 +0000 (0:00:01.231) 0:26:25.649 ********* 2026-03-24 05:15:45.986002 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:45.986082 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:45.986109 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:55.570396 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-35-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:55.570486 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:55.570507 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:55.570514 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:55.570597 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6bbbff7c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1', 'scsi-SQEMU_QEMU_HARDDISK_6bbbff7c-b34f-46ab-9339-96e122f5aec5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:55.570607 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:55.570618 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:15:55.570631 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:55.570639 | orchestrator | 2026-03-24 05:15:55.570646 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:15:55.570654 | orchestrator | Tuesday 24 March 2026 05:15:45 +0000 (0:00:01.227) 0:26:26.876 ********* 2026-03-24 05:15:55.570660 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:55.570667 | orchestrator | 2026-03-24 05:15:55.570674 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:15:55.570680 | orchestrator | Tuesday 24 March 2026 05:15:47 +0000 (0:00:01.494) 0:26:28.370 ********* 2026-03-24 05:15:55.570686 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:55.570693 | orchestrator | 2026-03-24 05:15:55.570699 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:15:55.570705 | orchestrator | Tuesday 24 March 2026 05:15:48 +0000 (0:00:01.119) 0:26:29.489 ********* 2026-03-24 05:15:55.570711 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:15:55.570717 | orchestrator | 2026-03-24 05:15:55.570724 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:15:55.570730 | orchestrator | Tuesday 24 March 2026 05:15:50 +0000 (0:00:01.499) 0:26:30.989 ********* 2026-03-24 05:15:55.570737 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:55.570743 | orchestrator | 2026-03-24 05:15:55.570749 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:15:55.570755 | orchestrator | Tuesday 24 March 2026 05:15:51 
+0000 (0:00:01.034) 0:26:32.024 ********* 2026-03-24 05:15:55.570761 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:55.570768 | orchestrator | 2026-03-24 05:15:55.570774 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:15:55.570780 | orchestrator | Tuesday 24 March 2026 05:15:52 +0000 (0:00:00.959) 0:26:32.983 ********* 2026-03-24 05:15:55.570786 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:55.570792 | orchestrator | 2026-03-24 05:15:55.570798 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:15:55.570804 | orchestrator | Tuesday 24 March 2026 05:15:52 +0000 (0:00:00.899) 0:26:33.883 ********* 2026-03-24 05:15:55.570811 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-24 05:15:55.570817 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 05:15:55.570823 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-24 05:15:55.570830 | orchestrator | 2026-03-24 05:15:55.570836 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:15:55.570842 | orchestrator | Tuesday 24 March 2026 05:15:54 +0000 (0:00:01.468) 0:26:35.352 ********* 2026-03-24 05:15:55.570848 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-24 05:15:55.570855 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-24 05:15:55.570861 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-24 05:15:55.570867 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:15:55.570874 | orchestrator | 2026-03-24 05:15:55.570884 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:16:30.789956 | orchestrator | Tuesday 24 March 2026 05:15:55 +0000 (0:00:01.115) 0:26:36.467 ********* 2026-03-24 05:16:30.790137 | 
orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.790161 | orchestrator | 2026-03-24 05:16:30.790178 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 05:16:30.790193 | orchestrator | Tuesday 24 March 2026 05:15:56 +0000 (0:00:01.071) 0:26:37.539 ********* 2026-03-24 05:16:30.790208 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:16:30.790224 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 05:16:30.790241 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:16:30.790256 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:16:30.790305 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:16:30.790320 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:16:30.790340 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:16:30.790354 | orchestrator | 2026-03-24 05:16:30.790368 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 05:16:30.790384 | orchestrator | Tuesday 24 March 2026 05:15:58 +0000 (0:00:01.885) 0:26:39.425 ********* 2026-03-24 05:16:30.790398 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:16:30.790412 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 05:16:30.790427 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:16:30.790443 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:16:30.790457 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] 
=> (item=testbed-node-4) 2026-03-24 05:16:30.790472 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:16:30.790505 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:16:30.790523 | orchestrator | 2026-03-24 05:16:30.790538 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 05:16:30.790553 | orchestrator | Tuesday 24 March 2026 05:16:00 +0000 (0:00:02.181) 0:26:41.606 ********* 2026-03-24 05:16:30.790568 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-03-24 05:16:30.790585 | orchestrator | 2026-03-24 05:16:30.790627 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-24 05:16:30.790643 | orchestrator | Tuesday 24 March 2026 05:16:01 +0000 (0:00:01.111) 0:26:42.717 ********* 2026-03-24 05:16:30.790658 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-03-24 05:16:30.790673 | orchestrator | 2026-03-24 05:16:30.790688 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-24 05:16:30.790703 | orchestrator | Tuesday 24 March 2026 05:16:02 +0000 (0:00:01.143) 0:26:43.861 ********* 2026-03-24 05:16:30.790719 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:16:30.790734 | orchestrator | 2026-03-24 05:16:30.790750 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-24 05:16:30.790765 | orchestrator | Tuesday 24 March 2026 05:16:04 +0000 (0:00:01.504) 0:26:45.365 ********* 2026-03-24 05:16:30.790778 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.790792 | orchestrator | 2026-03-24 05:16:30.790806 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-03-24 05:16:30.790820 | orchestrator | Tuesday 24 March 2026 05:16:05 +0000 (0:00:01.113) 0:26:46.479 ********* 2026-03-24 05:16:30.790834 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.790848 | orchestrator | 2026-03-24 05:16:30.790861 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-24 05:16:30.790875 | orchestrator | Tuesday 24 March 2026 05:16:06 +0000 (0:00:01.122) 0:26:47.601 ********* 2026-03-24 05:16:30.790883 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.790890 | orchestrator | 2026-03-24 05:16:30.790898 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-24 05:16:30.790907 | orchestrator | Tuesday 24 March 2026 05:16:07 +0000 (0:00:01.128) 0:26:48.730 ********* 2026-03-24 05:16:30.790915 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:16:30.790922 | orchestrator | 2026-03-24 05:16:30.790930 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-24 05:16:30.790938 | orchestrator | Tuesday 24 March 2026 05:16:09 +0000 (0:00:01.518) 0:26:50.249 ********* 2026-03-24 05:16:30.790946 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.790966 | orchestrator | 2026-03-24 05:16:30.790974 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-24 05:16:30.790982 | orchestrator | Tuesday 24 March 2026 05:16:10 +0000 (0:00:01.116) 0:26:51.365 ********* 2026-03-24 05:16:30.790990 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.790997 | orchestrator | 2026-03-24 05:16:30.791006 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-24 05:16:30.791013 | orchestrator | Tuesday 24 March 2026 05:16:11 +0000 (0:00:01.111) 0:26:52.477 ********* 2026-03-24 05:16:30.791021 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:16:30.791029 | 
orchestrator | 2026-03-24 05:16:30.791037 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-24 05:16:30.791044 | orchestrator | Tuesday 24 March 2026 05:16:13 +0000 (0:00:01.531) 0:26:54.009 ********* 2026-03-24 05:16:30.791052 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:16:30.791061 | orchestrator | 2026-03-24 05:16:30.791075 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-24 05:16:30.791111 | orchestrator | Tuesday 24 March 2026 05:16:14 +0000 (0:00:01.528) 0:26:55.537 ********* 2026-03-24 05:16:30.791126 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791139 | orchestrator | 2026-03-24 05:16:30.791153 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-24 05:16:30.791161 | orchestrator | Tuesday 24 March 2026 05:16:15 +0000 (0:00:00.760) 0:26:56.298 ********* 2026-03-24 05:16:30.791168 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:16:30.791176 | orchestrator | 2026-03-24 05:16:30.791184 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-24 05:16:30.791192 | orchestrator | Tuesday 24 March 2026 05:16:16 +0000 (0:00:00.791) 0:26:57.089 ********* 2026-03-24 05:16:30.791199 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791207 | orchestrator | 2026-03-24 05:16:30.791214 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-24 05:16:30.791222 | orchestrator | Tuesday 24 March 2026 05:16:16 +0000 (0:00:00.765) 0:26:57.855 ********* 2026-03-24 05:16:30.791230 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791238 | orchestrator | 2026-03-24 05:16:30.791245 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-24 05:16:30.791253 | orchestrator | Tuesday 24 March 2026 05:16:17 +0000 
(0:00:00.756) 0:26:58.611 ********* 2026-03-24 05:16:30.791261 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791268 | orchestrator | 2026-03-24 05:16:30.791276 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-24 05:16:30.791284 | orchestrator | Tuesday 24 March 2026 05:16:18 +0000 (0:00:00.761) 0:26:59.373 ********* 2026-03-24 05:16:30.791292 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791299 | orchestrator | 2026-03-24 05:16:30.791307 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-24 05:16:30.791315 | orchestrator | Tuesday 24 March 2026 05:16:19 +0000 (0:00:00.761) 0:27:00.134 ********* 2026-03-24 05:16:30.791323 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791330 | orchestrator | 2026-03-24 05:16:30.791338 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-24 05:16:30.791346 | orchestrator | Tuesday 24 March 2026 05:16:19 +0000 (0:00:00.760) 0:27:00.894 ********* 2026-03-24 05:16:30.791354 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:16:30.791361 | orchestrator | 2026-03-24 05:16:30.791369 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-24 05:16:30.791384 | orchestrator | Tuesday 24 March 2026 05:16:20 +0000 (0:00:00.805) 0:27:01.700 ********* 2026-03-24 05:16:30.791392 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:16:30.791400 | orchestrator | 2026-03-24 05:16:30.791408 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-24 05:16:30.791416 | orchestrator | Tuesday 24 March 2026 05:16:21 +0000 (0:00:00.800) 0:27:02.501 ********* 2026-03-24 05:16:30.791423 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:16:30.791437 | orchestrator | 2026-03-24 05:16:30.791446 | orchestrator | TASK [ceph-common : Include 
configure_repository.yml] ************************** 2026-03-24 05:16:30.791453 | orchestrator | Tuesday 24 March 2026 05:16:22 +0000 (0:00:00.795) 0:27:03.297 ********* 2026-03-24 05:16:30.791461 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791469 | orchestrator | 2026-03-24 05:16:30.791476 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-24 05:16:30.791484 | orchestrator | Tuesday 24 March 2026 05:16:23 +0000 (0:00:00.768) 0:27:04.066 ********* 2026-03-24 05:16:30.791492 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791500 | orchestrator | 2026-03-24 05:16:30.791507 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-24 05:16:30.791515 | orchestrator | Tuesday 24 March 2026 05:16:23 +0000 (0:00:00.758) 0:27:04.825 ********* 2026-03-24 05:16:30.791522 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791530 | orchestrator | 2026-03-24 05:16:30.791538 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-24 05:16:30.791545 | orchestrator | Tuesday 24 March 2026 05:16:24 +0000 (0:00:00.763) 0:27:05.588 ********* 2026-03-24 05:16:30.791553 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791561 | orchestrator | 2026-03-24 05:16:30.791568 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-24 05:16:30.791576 | orchestrator | Tuesday 24 March 2026 05:16:25 +0000 (0:00:00.784) 0:27:06.373 ********* 2026-03-24 05:16:30.791584 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791635 | orchestrator | 2026-03-24 05:16:30.791644 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-24 05:16:30.791651 | orchestrator | Tuesday 24 March 2026 05:16:26 +0000 (0:00:00.743) 0:27:07.116 ********* 2026-03-24 05:16:30.791659 | orchestrator | 
skipping: [testbed-node-1] 2026-03-24 05:16:30.791667 | orchestrator | 2026-03-24 05:16:30.791674 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-24 05:16:30.791682 | orchestrator | Tuesday 24 March 2026 05:16:26 +0000 (0:00:00.758) 0:27:07.875 ********* 2026-03-24 05:16:30.791690 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791697 | orchestrator | 2026-03-24 05:16:30.791705 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-24 05:16:30.791713 | orchestrator | Tuesday 24 March 2026 05:16:27 +0000 (0:00:00.753) 0:27:08.628 ********* 2026-03-24 05:16:30.791720 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791728 | orchestrator | 2026-03-24 05:16:30.791736 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-24 05:16:30.791743 | orchestrator | Tuesday 24 March 2026 05:16:28 +0000 (0:00:00.792) 0:27:09.421 ********* 2026-03-24 05:16:30.791751 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791759 | orchestrator | 2026-03-24 05:16:30.791766 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-24 05:16:30.791774 | orchestrator | Tuesday 24 March 2026 05:16:29 +0000 (0:00:00.762) 0:27:10.184 ********* 2026-03-24 05:16:30.791782 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791789 | orchestrator | 2026-03-24 05:16:30.791797 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-24 05:16:30.791805 | orchestrator | Tuesday 24 March 2026 05:16:30 +0000 (0:00:00.740) 0:27:10.925 ********* 2026-03-24 05:16:30.791813 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:16:30.791821 | orchestrator | 2026-03-24 05:16:30.791834 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 
2026-03-24 05:17:15.769567 | orchestrator | Tuesday 24 March 2026 05:16:30 +0000 (0:00:00.751) 0:27:11.677 ********* 2026-03-24 05:17:15.769722 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.769740 | orchestrator | 2026-03-24 05:17:15.769753 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-24 05:17:15.769765 | orchestrator | Tuesday 24 March 2026 05:16:31 +0000 (0:00:00.754) 0:27:12.432 ********* 2026-03-24 05:17:15.769776 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:15.769811 | orchestrator | 2026-03-24 05:17:15.769824 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-24 05:17:15.769836 | orchestrator | Tuesday 24 March 2026 05:16:33 +0000 (0:00:01.615) 0:27:14.048 ********* 2026-03-24 05:17:15.769848 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:15.769858 | orchestrator | 2026-03-24 05:17:15.769869 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-24 05:17:15.769880 | orchestrator | Tuesday 24 March 2026 05:16:35 +0000 (0:00:02.123) 0:27:16.172 ********* 2026-03-24 05:17:15.769891 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-03-24 05:17:15.769904 | orchestrator | 2026-03-24 05:17:15.769915 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-24 05:17:15.769927 | orchestrator | Tuesday 24 March 2026 05:16:36 +0000 (0:00:01.085) 0:27:17.258 ********* 2026-03-24 05:17:15.769937 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.769948 | orchestrator | 2026-03-24 05:17:15.769959 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-24 05:17:15.769969 | orchestrator | Tuesday 24 March 2026 05:16:37 +0000 (0:00:01.123) 0:27:18.381 ********* 2026-03-24 05:17:15.769980 | orchestrator | 
skipping: [testbed-node-1] 2026-03-24 05:17:15.769990 | orchestrator | 2026-03-24 05:17:15.770002 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-24 05:17:15.770070 | orchestrator | Tuesday 24 March 2026 05:16:38 +0000 (0:00:01.106) 0:27:19.488 ********* 2026-03-24 05:17:15.770083 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-24 05:17:15.770095 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-24 05:17:15.770108 | orchestrator | 2026-03-24 05:17:15.770133 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-24 05:17:15.770144 | orchestrator | Tuesday 24 March 2026 05:16:40 +0000 (0:00:01.829) 0:27:21.317 ********* 2026-03-24 05:17:15.770156 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:15.770167 | orchestrator | 2026-03-24 05:17:15.770179 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-24 05:17:15.770191 | orchestrator | Tuesday 24 March 2026 05:16:41 +0000 (0:00:01.448) 0:27:22.767 ********* 2026-03-24 05:17:15.770203 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770214 | orchestrator | 2026-03-24 05:17:15.770225 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-24 05:17:15.770236 | orchestrator | Tuesday 24 March 2026 05:16:43 +0000 (0:00:01.140) 0:27:23.907 ********* 2026-03-24 05:17:15.770247 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770258 | orchestrator | 2026-03-24 05:17:15.770270 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-24 05:17:15.770282 | orchestrator | Tuesday 24 March 2026 05:16:43 +0000 (0:00:00.796) 0:27:24.704 ********* 2026-03-24 05:17:15.770293 | orchestrator | skipping: [testbed-node-1] 2026-03-24 
05:17:15.770304 | orchestrator | 2026-03-24 05:17:15.770316 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-24 05:17:15.770326 | orchestrator | Tuesday 24 March 2026 05:16:44 +0000 (0:00:00.753) 0:27:25.457 ********* 2026-03-24 05:17:15.770337 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-03-24 05:17:15.770348 | orchestrator | 2026-03-24 05:17:15.770359 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-24 05:17:15.770371 | orchestrator | Tuesday 24 March 2026 05:16:45 +0000 (0:00:01.145) 0:27:26.603 ********* 2026-03-24 05:17:15.770381 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:15.770393 | orchestrator | 2026-03-24 05:17:15.770404 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-24 05:17:15.770414 | orchestrator | Tuesday 24 March 2026 05:16:47 +0000 (0:00:01.739) 0:27:28.342 ********* 2026-03-24 05:17:15.770425 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 05:17:15.770444 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 05:17:15.770454 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 05:17:15.770465 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770476 | orchestrator | 2026-03-24 05:17:15.770487 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-24 05:17:15.770498 | orchestrator | Tuesday 24 March 2026 05:16:48 +0000 (0:00:01.133) 0:27:29.475 ********* 2026-03-24 05:17:15.770509 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770520 | orchestrator | 2026-03-24 05:17:15.770531 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 
2026-03-24 05:17:15.770543 | orchestrator | Tuesday 24 March 2026 05:16:49 +0000 (0:00:01.118) 0:27:30.594 ********* 2026-03-24 05:17:15.770554 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770566 | orchestrator | 2026-03-24 05:17:15.770577 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-24 05:17:15.770587 | orchestrator | Tuesday 24 March 2026 05:16:50 +0000 (0:00:01.194) 0:27:31.788 ********* 2026-03-24 05:17:15.770598 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770608 | orchestrator | 2026-03-24 05:17:15.770619 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-24 05:17:15.770629 | orchestrator | Tuesday 24 March 2026 05:16:52 +0000 (0:00:01.141) 0:27:32.930 ********* 2026-03-24 05:17:15.770657 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770668 | orchestrator | 2026-03-24 05:17:15.770696 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-24 05:17:15.770708 | orchestrator | Tuesday 24 March 2026 05:16:53 +0000 (0:00:01.152) 0:27:34.082 ********* 2026-03-24 05:17:15.770718 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770729 | orchestrator | 2026-03-24 05:17:15.770737 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-24 05:17:15.770743 | orchestrator | Tuesday 24 March 2026 05:16:53 +0000 (0:00:00.774) 0:27:34.857 ********* 2026-03-24 05:17:15.770753 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:15.770763 | orchestrator | 2026-03-24 05:17:15.770774 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-24 05:17:15.770784 | orchestrator | Tuesday 24 March 2026 05:16:56 +0000 (0:00:02.224) 0:27:37.082 ********* 2026-03-24 05:17:15.770794 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:15.770805 | 
orchestrator | 2026-03-24 05:17:15.770816 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-24 05:17:15.770826 | orchestrator | Tuesday 24 March 2026 05:16:56 +0000 (0:00:00.757) 0:27:37.839 ********* 2026-03-24 05:17:15.770836 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-03-24 05:17:15.770846 | orchestrator | 2026-03-24 05:17:15.770857 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-24 05:17:15.770863 | orchestrator | Tuesday 24 March 2026 05:16:58 +0000 (0:00:01.114) 0:27:38.953 ********* 2026-03-24 05:17:15.770869 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770875 | orchestrator | 2026-03-24 05:17:15.770882 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-24 05:17:15.770888 | orchestrator | Tuesday 24 March 2026 05:16:59 +0000 (0:00:01.154) 0:27:40.107 ********* 2026-03-24 05:17:15.770894 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770900 | orchestrator | 2026-03-24 05:17:15.770906 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-24 05:17:15.770912 | orchestrator | Tuesday 24 March 2026 05:17:00 +0000 (0:00:01.135) 0:27:41.243 ********* 2026-03-24 05:17:15.770918 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770924 | orchestrator | 2026-03-24 05:17:15.770930 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-24 05:17:15.770942 | orchestrator | Tuesday 24 March 2026 05:17:01 +0000 (0:00:01.128) 0:27:42.372 ********* 2026-03-24 05:17:15.770955 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770961 | orchestrator | 2026-03-24 05:17:15.770967 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-24 
05:17:15.770973 | orchestrator | Tuesday 24 March 2026 05:17:02 +0000 (0:00:01.221) 0:27:43.594 ********* 2026-03-24 05:17:15.770979 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.770985 | orchestrator | 2026-03-24 05:17:15.770991 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-24 05:17:15.770997 | orchestrator | Tuesday 24 March 2026 05:17:03 +0000 (0:00:01.113) 0:27:44.707 ********* 2026-03-24 05:17:15.771003 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.771010 | orchestrator | 2026-03-24 05:17:15.771016 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-24 05:17:15.771022 | orchestrator | Tuesday 24 March 2026 05:17:04 +0000 (0:00:01.131) 0:27:45.839 ********* 2026-03-24 05:17:15.771028 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.771034 | orchestrator | 2026-03-24 05:17:15.771040 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-24 05:17:15.771046 | orchestrator | Tuesday 24 March 2026 05:17:06 +0000 (0:00:01.171) 0:27:47.011 ********* 2026-03-24 05:17:15.771052 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:15.771058 | orchestrator | 2026-03-24 05:17:15.771064 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-24 05:17:15.771070 | orchestrator | Tuesday 24 March 2026 05:17:07 +0000 (0:00:01.153) 0:27:48.164 ********* 2026-03-24 05:17:15.771076 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:15.771083 | orchestrator | 2026-03-24 05:17:15.771089 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-24 05:17:15.771095 | orchestrator | Tuesday 24 March 2026 05:17:08 +0000 (0:00:00.814) 0:27:48.979 ********* 2026-03-24 05:17:15.771101 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-03-24 05:17:15.771107 | orchestrator | 2026-03-24 05:17:15.771113 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-24 05:17:15.771119 | orchestrator | Tuesday 24 March 2026 05:17:09 +0000 (0:00:01.107) 0:27:50.086 ********* 2026-03-24 05:17:15.771125 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-03-24 05:17:15.771131 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-24 05:17:15.771138 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-24 05:17:15.771149 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-24 05:17:15.771158 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-24 05:17:15.771167 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-24 05:17:15.771178 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-24 05:17:15.771194 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-24 05:17:15.771203 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 05:17:15.771213 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 05:17:15.771223 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 05:17:15.771233 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 05:17:15.771242 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 05:17:15.771253 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 05:17:15.771263 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-03-24 05:17:15.771273 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-03-24 05:17:15.771284 | orchestrator | 2026-03-24 05:17:15.771302 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-24 05:17:56.749723 | orchestrator | Tuesday 24 March 2026 05:17:15 +0000 (0:00:06.565) 0:27:56.651 ********* 2026-03-24 05:17:56.749835 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.749873 | orchestrator | 2026-03-24 05:17:56.749884 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-24 05:17:56.749893 | orchestrator | Tuesday 24 March 2026 05:17:16 +0000 (0:00:00.771) 0:27:57.422 ********* 2026-03-24 05:17:56.749901 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.749910 | orchestrator | 2026-03-24 05:17:56.749918 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-24 05:17:56.749927 | orchestrator | Tuesday 24 March 2026 05:17:17 +0000 (0:00:00.750) 0:27:58.173 ********* 2026-03-24 05:17:56.749937 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.749946 | orchestrator | 2026-03-24 05:17:56.749951 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-24 05:17:56.749956 | orchestrator | Tuesday 24 March 2026 05:17:18 +0000 (0:00:00.794) 0:27:58.968 ********* 2026-03-24 05:17:56.749961 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.749966 | orchestrator | 2026-03-24 05:17:56.749973 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-24 05:17:56.749981 | orchestrator | Tuesday 24 March 2026 05:17:18 +0000 (0:00:00.790) 0:27:59.758 ********* 2026-03-24 05:17:56.749990 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.749997 | orchestrator | 2026-03-24 05:17:56.750006 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-24 05:17:56.750014 | orchestrator | Tuesday 24 March 2026 05:17:19 +0000 (0:00:00.767) 0:28:00.526 ********* 2026-03-24 
05:17:56.750075 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750083 | orchestrator | 2026-03-24 05:17:56.750091 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-24 05:17:56.750101 | orchestrator | Tuesday 24 March 2026 05:17:20 +0000 (0:00:00.783) 0:28:01.309 ********* 2026-03-24 05:17:56.750110 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750118 | orchestrator | 2026-03-24 05:17:56.750127 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-24 05:17:56.750149 | orchestrator | Tuesday 24 March 2026 05:17:21 +0000 (0:00:00.756) 0:28:02.066 ********* 2026-03-24 05:17:56.750184 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750193 | orchestrator | 2026-03-24 05:17:56.750201 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-24 05:17:56.750210 | orchestrator | Tuesday 24 March 2026 05:17:21 +0000 (0:00:00.767) 0:28:02.834 ********* 2026-03-24 05:17:56.750219 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750227 | orchestrator | 2026-03-24 05:17:56.750236 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 05:17:56.750244 | orchestrator | Tuesday 24 March 2026 05:17:22 +0000 (0:00:00.749) 0:28:03.584 ********* 2026-03-24 05:17:56.750252 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750261 | orchestrator | 2026-03-24 05:17:56.750270 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 05:17:56.750280 | orchestrator | Tuesday 24 March 2026 05:17:23 +0000 (0:00:01.261) 0:28:04.845 ********* 2026-03-24 05:17:56.750286 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750291 | orchestrator | 2026-03-24 
05:17:56.750297 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 05:17:56.750303 | orchestrator | Tuesday 24 March 2026 05:17:24 +0000 (0:00:00.756) 0:28:05.602 ********* 2026-03-24 05:17:56.750309 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750314 | orchestrator | 2026-03-24 05:17:56.750320 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 05:17:56.750326 | orchestrator | Tuesday 24 March 2026 05:17:25 +0000 (0:00:00.788) 0:28:06.391 ********* 2026-03-24 05:17:56.750331 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750337 | orchestrator | 2026-03-24 05:17:56.750343 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 05:17:56.750348 | orchestrator | Tuesday 24 March 2026 05:17:26 +0000 (0:00:00.857) 0:28:07.248 ********* 2026-03-24 05:17:56.750361 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750366 | orchestrator | 2026-03-24 05:17:56.750372 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-24 05:17:56.750378 | orchestrator | Tuesday 24 March 2026 05:17:27 +0000 (0:00:00.776) 0:28:08.025 ********* 2026-03-24 05:17:56.750384 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750389 | orchestrator | 2026-03-24 05:17:56.750395 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:17:56.750401 | orchestrator | Tuesday 24 March 2026 05:17:27 +0000 (0:00:00.849) 0:28:08.874 ********* 2026-03-24 05:17:56.750406 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750412 | orchestrator | 2026-03-24 05:17:56.750418 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:17:56.750423 | orchestrator | Tuesday 24 March 2026 05:17:28 +0000 (0:00:00.763) 
0:28:09.637 ********* 2026-03-24 05:17:56.750431 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750440 | orchestrator | 2026-03-24 05:17:56.750448 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:17:56.750459 | orchestrator | Tuesday 24 March 2026 05:17:29 +0000 (0:00:00.771) 0:28:10.408 ********* 2026-03-24 05:17:56.750467 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750476 | orchestrator | 2026-03-24 05:17:56.750485 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:17:56.750493 | orchestrator | Tuesday 24 March 2026 05:17:30 +0000 (0:00:00.764) 0:28:11.173 ********* 2026-03-24 05:17:56.750498 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750504 | orchestrator | 2026-03-24 05:17:56.750510 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:17:56.750516 | orchestrator | Tuesday 24 March 2026 05:17:31 +0000 (0:00:00.786) 0:28:11.959 ********* 2026-03-24 05:17:56.750521 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750527 | orchestrator | 2026-03-24 05:17:56.750553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:17:56.750562 | orchestrator | Tuesday 24 March 2026 05:17:31 +0000 (0:00:00.773) 0:28:12.732 ********* 2026-03-24 05:17:56.750571 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750580 | orchestrator | 2026-03-24 05:17:56.750588 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:17:56.750597 | orchestrator | Tuesday 24 March 2026 05:17:32 +0000 (0:00:00.770) 0:28:13.503 ********* 2026-03-24 05:17:56.750605 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-24 05:17:56.750610 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-24 05:17:56.750615 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-24 05:17:56.750620 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750625 | orchestrator | 2026-03-24 05:17:56.750630 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:17:56.750635 | orchestrator | Tuesday 24 March 2026 05:17:33 +0000 (0:00:01.376) 0:28:14.880 ********* 2026-03-24 05:17:56.750640 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-24 05:17:56.750647 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-24 05:17:56.750655 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-24 05:17:56.750677 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750709 | orchestrator | 2026-03-24 05:17:56.750726 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:17:56.750734 | orchestrator | Tuesday 24 March 2026 05:17:35 +0000 (0:00:01.326) 0:28:16.207 ********* 2026-03-24 05:17:56.750739 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-24 05:17:56.750747 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-24 05:17:56.750755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-24 05:17:56.750769 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750778 | orchestrator | 2026-03-24 05:17:56.750786 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:17:56.750800 | orchestrator | Tuesday 24 March 2026 05:17:36 +0000 (0:00:01.036) 0:28:17.243 ********* 2026-03-24 05:17:56.750810 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750818 | orchestrator | 2026-03-24 05:17:56.750827 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-03-24 05:17:56.750835 | orchestrator | Tuesday 24 March 2026 05:17:37 +0000 (0:00:00.781) 0:28:18.024 ********* 2026-03-24 05:17:56.750845 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-24 05:17:56.750854 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.750863 | orchestrator | 2026-03-24 05:17:56.750871 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:17:56.750880 | orchestrator | Tuesday 24 March 2026 05:17:38 +0000 (0:00:00.879) 0:28:18.904 ********* 2026-03-24 05:17:56.750888 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:56.750897 | orchestrator | 2026-03-24 05:17:56.750905 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-24 05:17:56.750913 | orchestrator | Tuesday 24 March 2026 05:17:39 +0000 (0:00:01.420) 0:28:20.325 ********* 2026-03-24 05:17:56.750920 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:17:56.750929 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-24 05:17:56.750936 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:17:56.750945 | orchestrator | 2026-03-24 05:17:56.750954 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-24 05:17:56.750963 | orchestrator | Tuesday 24 March 2026 05:17:40 +0000 (0:00:01.335) 0:28:21.660 ********* 2026-03-24 05:17:56.750971 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-03-24 05:17:56.750979 | orchestrator | 2026-03-24 05:17:56.750987 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-24 05:17:56.750996 | orchestrator | Tuesday 24 March 2026 05:17:41 +0000 (0:00:01.080) 0:28:22.741 ********* 
2026-03-24 05:17:56.751004 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:56.751013 | orchestrator | 2026-03-24 05:17:56.751021 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-24 05:17:56.751029 | orchestrator | Tuesday 24 March 2026 05:17:43 +0000 (0:00:01.487) 0:28:24.230 ********* 2026-03-24 05:17:56.751037 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:17:56.751046 | orchestrator | 2026-03-24 05:17:56.751055 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-24 05:17:56.751063 | orchestrator | Tuesday 24 March 2026 05:17:44 +0000 (0:00:01.122) 0:28:25.353 ********* 2026-03-24 05:17:56.751071 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:17:56.751080 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:17:56.751088 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:17:56.751098 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-03-24 05:17:56.751103 | orchestrator | 2026-03-24 05:17:56.751108 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-24 05:17:56.751113 | orchestrator | Tuesday 24 March 2026 05:17:52 +0000 (0:00:07.894) 0:28:33.248 ********* 2026-03-24 05:17:56.751118 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:17:56.751123 | orchestrator | 2026-03-24 05:17:56.751128 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-24 05:17:56.751132 | orchestrator | Tuesday 24 March 2026 05:17:53 +0000 (0:00:01.178) 0:28:34.427 ********* 2026-03-24 05:17:56.751137 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-24 05:17:56.751142 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-24 
05:17:56.751153 | orchestrator | 2026-03-24 05:17:56.751165 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:18:42.844865 | orchestrator | Tuesday 24 March 2026 05:17:56 +0000 (0:00:03.205) 0:28:37.633 ********* 2026-03-24 05:18:42.844969 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-24 05:18:42.844986 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-24 05:18:42.844998 | orchestrator | 2026-03-24 05:18:42.845010 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-24 05:18:42.845021 | orchestrator | Tuesday 24 March 2026 05:17:58 +0000 (0:00:01.962) 0:28:39.596 ********* 2026-03-24 05:18:42.845032 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:18:42.845043 | orchestrator | 2026-03-24 05:18:42.845054 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-24 05:18:42.845065 | orchestrator | Tuesday 24 March 2026 05:18:00 +0000 (0:00:01.500) 0:28:41.096 ********* 2026-03-24 05:18:42.845076 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:18:42.845087 | orchestrator | 2026-03-24 05:18:42.845098 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-24 05:18:42.845109 | orchestrator | Tuesday 24 March 2026 05:18:00 +0000 (0:00:00.755) 0:28:41.851 ********* 2026-03-24 05:18:42.845120 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:18:42.845131 | orchestrator | 2026-03-24 05:18:42.845142 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-24 05:18:42.845153 | orchestrator | Tuesday 24 March 2026 05:18:01 +0000 (0:00:00.764) 0:28:42.616 ********* 2026-03-24 05:18:42.845163 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-03-24 05:18:42.845174 | orchestrator | 2026-03-24 05:18:42.845185 | orchestrator | 
TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-24 05:18:42.845196 | orchestrator | Tuesday 24 March 2026 05:18:02 +0000 (0:00:01.120) 0:28:43.737 ********* 2026-03-24 05:18:42.845207 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:18:42.845218 | orchestrator | 2026-03-24 05:18:42.845229 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-24 05:18:42.845240 | orchestrator | Tuesday 24 March 2026 05:18:03 +0000 (0:00:01.139) 0:28:44.877 ********* 2026-03-24 05:18:42.845251 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:18:42.845261 | orchestrator | 2026-03-24 05:18:42.845272 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-24 05:18:42.845296 | orchestrator | Tuesday 24 March 2026 05:18:05 +0000 (0:00:01.155) 0:28:46.033 ********* 2026-03-24 05:18:42.845308 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-03-24 05:18:42.845319 | orchestrator | 2026-03-24 05:18:42.845330 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-24 05:18:42.845341 | orchestrator | Tuesday 24 March 2026 05:18:06 +0000 (0:00:01.096) 0:28:47.129 ********* 2026-03-24 05:18:42.845351 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:18:42.845369 | orchestrator | 2026-03-24 05:18:42.845389 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-24 05:18:42.845408 | orchestrator | Tuesday 24 March 2026 05:18:08 +0000 (0:00:02.049) 0:28:49.179 ********* 2026-03-24 05:18:42.845440 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:18:42.845459 | orchestrator | 2026-03-24 05:18:42.845478 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-24 05:18:42.845497 | orchestrator | Tuesday 24 March 2026 05:18:10 +0000 (0:00:01.945) 
0:28:51.125 ********* 2026-03-24 05:18:42.845516 | orchestrator | ok: [testbed-node-1] 2026-03-24 05:18:42.845535 | orchestrator | 2026-03-24 05:18:42.845555 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-24 05:18:42.845575 | orchestrator | Tuesday 24 March 2026 05:18:12 +0000 (0:00:02.430) 0:28:53.555 ********* 2026-03-24 05:18:42.845594 | orchestrator | changed: [testbed-node-1] 2026-03-24 05:18:42.845613 | orchestrator | 2026-03-24 05:18:42.845633 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-24 05:18:42.845681 | orchestrator | Tuesday 24 March 2026 05:18:16 +0000 (0:00:03.561) 0:28:57.117 ********* 2026-03-24 05:18:42.845700 | orchestrator | skipping: [testbed-node-1] 2026-03-24 05:18:42.845712 | orchestrator | 2026-03-24 05:18:42.845723 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-24 05:18:42.845760 | orchestrator | 2026-03-24 05:18:42.845773 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-24 05:18:42.845784 | orchestrator | Tuesday 24 March 2026 05:18:17 +0000 (0:00:00.967) 0:28:58.084 ********* 2026-03-24 05:18:42.845795 | orchestrator | changed: [testbed-node-2] 2026-03-24 05:18:42.845806 | orchestrator | 2026-03-24 05:18:42.845817 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-24 05:18:42.845827 | orchestrator | Tuesday 24 March 2026 05:18:19 +0000 (0:00:02.558) 0:29:00.643 ********* 2026-03-24 05:18:42.845838 | orchestrator | changed: [testbed-node-2] 2026-03-24 05:18:42.845848 | orchestrator | 2026-03-24 05:18:42.845859 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:18:42.845870 | orchestrator | Tuesday 24 March 2026 05:18:21 +0000 (0:00:02.095) 0:29:02.738 ********* 2026-03-24 05:18:42.845880 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-24 05:18:42.845891 | orchestrator | 2026-03-24 05:18:42.845901 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:18:42.845912 | orchestrator | Tuesday 24 March 2026 05:18:22 +0000 (0:00:01.088) 0:29:03.827 ********* 2026-03-24 05:18:42.845923 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:18:42.845933 | orchestrator | 2026-03-24 05:18:42.845944 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:18:42.845955 | orchestrator | Tuesday 24 March 2026 05:18:24 +0000 (0:00:01.499) 0:29:05.326 ********* 2026-03-24 05:18:42.845965 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:18:42.845976 | orchestrator | 2026-03-24 05:18:42.845986 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:18:42.845997 | orchestrator | Tuesday 24 March 2026 05:18:25 +0000 (0:00:01.172) 0:29:06.499 ********* 2026-03-24 05:18:42.846008 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:18:42.846074 | orchestrator | 2026-03-24 05:18:42.846090 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:18:42.846176 | orchestrator | Tuesday 24 March 2026 05:18:27 +0000 (0:00:01.458) 0:29:07.957 ********* 2026-03-24 05:18:42.846202 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:18:42.846221 | orchestrator | 2026-03-24 05:18:42.846239 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:18:42.846259 | orchestrator | Tuesday 24 March 2026 05:18:28 +0000 (0:00:01.119) 0:29:09.076 ********* 2026-03-24 05:18:42.846278 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:18:42.846296 | orchestrator | 2026-03-24 05:18:42.846313 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] 
********************* 2026-03-24 05:18:42.846324 | orchestrator | Tuesday 24 March 2026 05:18:29 +0000 (0:00:01.144) 0:29:10.221 ********* 2026-03-24 05:18:42.846335 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:18:42.846345 | orchestrator | 2026-03-24 05:18:42.846356 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:18:42.846367 | orchestrator | Tuesday 24 March 2026 05:18:30 +0000 (0:00:01.232) 0:29:11.454 ********* 2026-03-24 05:18:42.846377 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:18:42.846388 | orchestrator | 2026-03-24 05:18:42.846398 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:18:42.846409 | orchestrator | Tuesday 24 March 2026 05:18:31 +0000 (0:00:01.131) 0:29:12.585 ********* 2026-03-24 05:18:42.846420 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:18:42.846430 | orchestrator | 2026-03-24 05:18:42.846476 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:18:42.846508 | orchestrator | Tuesday 24 March 2026 05:18:32 +0000 (0:00:01.108) 0:29:13.694 ********* 2026-03-24 05:18:42.846532 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:18:42.846543 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:18:42.846554 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:18:42.846565 | orchestrator | 2026-03-24 05:18:42.846576 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:18:42.846587 | orchestrator | Tuesday 24 March 2026 05:18:34 +0000 (0:00:01.636) 0:29:15.330 ********* 2026-03-24 05:18:42.846597 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:18:42.846608 | orchestrator | 2026-03-24 05:18:42.846627 | orchestrator | TASK [ceph-facts : 
Find a running mon container] ******************************* 2026-03-24 05:18:42.846638 | orchestrator | Tuesday 24 March 2026 05:18:35 +0000 (0:00:01.265) 0:29:16.596 ********* 2026-03-24 05:18:42.846649 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:18:42.846659 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:18:42.846670 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:18:42.846680 | orchestrator | 2026-03-24 05:18:42.846691 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:18:42.846702 | orchestrator | Tuesday 24 March 2026 05:18:38 +0000 (0:00:02.702) 0:29:19.298 ********* 2026-03-24 05:18:42.846712 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-24 05:18:42.846723 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-24 05:18:42.846755 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-24 05:18:42.846767 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:18:42.846778 | orchestrator | 2026-03-24 05:18:42.846789 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:18:42.846799 | orchestrator | Tuesday 24 March 2026 05:18:39 +0000 (0:00:01.410) 0:29:20.709 ********* 2026-03-24 05:18:42.846811 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:18:42.846825 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})  2026-03-24 05:18:42.846836 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:18:42.846847 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:18:42.846858 | orchestrator | 2026-03-24 05:18:42.846869 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:18:42.846879 | orchestrator | Tuesday 24 March 2026 05:18:41 +0000 (0:00:01.889) 0:29:22.599 ********* 2026-03-24 05:18:42.846892 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:18:42.846916 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:02.432350 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2026-03-24 05:19:02.432447 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.432459 | orchestrator | 2026-03-24 05:19:02.432469 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:19:02.432477 | orchestrator | Tuesday 24 March 2026 05:18:42 +0000 (0:00:01.134) 0:29:23.733 ********* 2026-03-24 05:19:02.432487 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:18:36.201267', 'end': '2026-03-24 05:18:36.246859', 'delta': '0:00:00.045592', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:19:02.432535 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:18:36.740482', 'end': '2026-03-24 05:18:36.794506', 'delta': '0:00:00.054024', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:19:02.432545 | orchestrator | ok: 
[testbed-node-2] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:18:37.244145', 'end': '2026-03-24 05:18:37.289883', 'delta': '0:00:00.045738', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:19:02.432554 | orchestrator | 2026-03-24 05:19:02.432562 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:19:02.432569 | orchestrator | Tuesday 24 March 2026 05:18:44 +0000 (0:00:01.231) 0:29:24.964 ********* 2026-03-24 05:19:02.432577 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:02.432586 | orchestrator | 2026-03-24 05:19:02.432594 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:19:02.432601 | orchestrator | Tuesday 24 March 2026 05:18:45 +0000 (0:00:01.247) 0:29:26.212 ********* 2026-03-24 05:19:02.432609 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.432617 | orchestrator | 2026-03-24 05:19:02.432624 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:19:02.432632 | orchestrator | Tuesday 24 March 2026 05:18:46 +0000 (0:00:01.569) 0:29:27.781 ********* 2026-03-24 05:19:02.432640 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:02.432652 | orchestrator | 2026-03-24 05:19:02.432688 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:19:02.432702 | 
orchestrator | Tuesday 24 March 2026 05:18:48 +0000 (0:00:01.164) 0:29:28.946 ********* 2026-03-24 05:19:02.432714 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:19:02.432726 | orchestrator | 2026-03-24 05:19:02.432739 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:19:02.432806 | orchestrator | Tuesday 24 March 2026 05:18:50 +0000 (0:00:01.955) 0:29:30.902 ********* 2026-03-24 05:19:02.432820 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:02.432832 | orchestrator | 2026-03-24 05:19:02.432843 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:19:02.432855 | orchestrator | Tuesday 24 March 2026 05:18:51 +0000 (0:00:01.142) 0:29:32.045 ********* 2026-03-24 05:19:02.432887 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.432900 | orchestrator | 2026-03-24 05:19:02.432913 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:19:02.432927 | orchestrator | Tuesday 24 March 2026 05:18:52 +0000 (0:00:01.101) 0:29:33.146 ********* 2026-03-24 05:19:02.432939 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.432952 | orchestrator | 2026-03-24 05:19:02.432960 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:19:02.432968 | orchestrator | Tuesday 24 March 2026 05:18:53 +0000 (0:00:01.199) 0:29:34.346 ********* 2026-03-24 05:19:02.432977 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.432985 | orchestrator | 2026-03-24 05:19:02.432994 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:19:02.433002 | orchestrator | Tuesday 24 March 2026 05:18:54 +0000 (0:00:01.109) 0:29:35.455 ********* 2026-03-24 05:19:02.433010 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.433018 | 
orchestrator | 2026-03-24 05:19:02.433027 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:19:02.433035 | orchestrator | Tuesday 24 March 2026 05:18:55 +0000 (0:00:01.109) 0:29:36.565 ********* 2026-03-24 05:19:02.433043 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.433051 | orchestrator | 2026-03-24 05:19:02.433059 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:19:02.433067 | orchestrator | Tuesday 24 March 2026 05:18:56 +0000 (0:00:01.106) 0:29:37.672 ********* 2026-03-24 05:19:02.433076 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.433084 | orchestrator | 2026-03-24 05:19:02.433092 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:19:02.433100 | orchestrator | Tuesday 24 March 2026 05:18:57 +0000 (0:00:01.112) 0:29:38.784 ********* 2026-03-24 05:19:02.433108 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.433116 | orchestrator | 2026-03-24 05:19:02.433125 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:19:02.433132 | orchestrator | Tuesday 24 March 2026 05:18:58 +0000 (0:00:01.093) 0:29:39.877 ********* 2026-03-24 05:19:02.433141 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.433149 | orchestrator | 2026-03-24 05:19:02.433165 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:19:02.433174 | orchestrator | Tuesday 24 March 2026 05:19:00 +0000 (0:00:01.093) 0:29:40.971 ********* 2026-03-24 05:19:02.433182 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:02.433191 | orchestrator | 2026-03-24 05:19:02.433199 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:19:02.433207 | orchestrator | Tuesday 24 March 2026 
05:19:01 +0000 (0:00:01.132) 0:29:42.103 ********* 2026-03-24 05:19:02.433216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:19:02.433233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:19:02.433242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:19:02.433252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'holders': []}})  2026-03-24 05:19:02.433263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:19:02.433278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:19:03.661884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:19:03.662088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4fc154b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16', 
'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:19:03.662160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:19:03.662182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:19:03.662205 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:03.662233 | orchestrator | 2026-03-24 05:19:03.662258 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:19:03.662284 | orchestrator | Tuesday 24 March 2026 05:19:02 +0000 (0:00:01.214) 0:29:43.317 ********* 2026-03-24 05:19:03.662348 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:03.662371 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:03.662395 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:03.662427 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:03.662444 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:03.662461 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:03.662478 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:03.662513 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f4fc154b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1', 'scsi-SQEMU_QEMU_HARDDISK_f4fc154b-cdf9-4366-8d70-cd811913fdc6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:37.311925 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:37.312073 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:19:37.312103 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.312127 | orchestrator | 2026-03-24 05:19:37.312149 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:19:37.312169 | 
orchestrator | Tuesday 24 March 2026 05:19:03 +0000 (0:00:01.232) 0:29:44.550 ********* 2026-03-24 05:19:37.312187 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:37.312206 | orchestrator | 2026-03-24 05:19:37.312224 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:19:37.312244 | orchestrator | Tuesday 24 March 2026 05:19:05 +0000 (0:00:01.521) 0:29:46.072 ********* 2026-03-24 05:19:37.312262 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:37.312280 | orchestrator | 2026-03-24 05:19:37.312299 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:19:37.312317 | orchestrator | Tuesday 24 March 2026 05:19:06 +0000 (0:00:01.116) 0:29:47.188 ********* 2026-03-24 05:19:37.312336 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:37.312356 | orchestrator | 2026-03-24 05:19:37.312375 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:19:37.312394 | orchestrator | Tuesday 24 March 2026 05:19:07 +0000 (0:00:01.447) 0:29:48.636 ********* 2026-03-24 05:19:37.312413 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.312432 | orchestrator | 2026-03-24 05:19:37.312451 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:19:37.312472 | orchestrator | Tuesday 24 March 2026 05:19:08 +0000 (0:00:01.089) 0:29:49.725 ********* 2026-03-24 05:19:37.312491 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.312510 | orchestrator | 2026-03-24 05:19:37.312566 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:19:37.312587 | orchestrator | Tuesday 24 March 2026 05:19:10 +0000 (0:00:01.203) 0:29:50.928 ********* 2026-03-24 05:19:37.312605 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.312623 | orchestrator | 2026-03-24 05:19:37.312641 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:19:37.312658 | orchestrator | Tuesday 24 March 2026 05:19:11 +0000 (0:00:01.106) 0:29:52.035 ********* 2026-03-24 05:19:37.312676 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-24 05:19:37.312696 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-24 05:19:37.312714 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:19:37.312734 | orchestrator | 2026-03-24 05:19:37.312745 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:19:37.312771 | orchestrator | Tuesday 24 March 2026 05:19:12 +0000 (0:00:01.629) 0:29:53.665 ********* 2026-03-24 05:19:37.312810 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-24 05:19:37.312822 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-24 05:19:37.312833 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-24 05:19:37.312844 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.312854 | orchestrator | 2026-03-24 05:19:37.312865 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:19:37.312876 | orchestrator | Tuesday 24 March 2026 05:19:13 +0000 (0:00:01.131) 0:29:54.796 ********* 2026-03-24 05:19:37.312886 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.312897 | orchestrator | 2026-03-24 05:19:37.312908 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 05:19:37.312918 | orchestrator | Tuesday 24 March 2026 05:19:15 +0000 (0:00:01.145) 0:29:55.941 ********* 2026-03-24 05:19:37.312929 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:19:37.312941 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-03-24 05:19:37.312952 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:19:37.312963 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:19:37.312973 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:19:37.312984 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:19:37.313015 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:19:37.313026 | orchestrator | 2026-03-24 05:19:37.313037 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 05:19:37.313048 | orchestrator | Tuesday 24 March 2026 05:19:17 +0000 (0:00:02.083) 0:29:58.025 ********* 2026-03-24 05:19:37.313059 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:19:37.313069 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:19:37.313080 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:19:37.313090 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:19:37.313101 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:19:37.313112 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:19:37.313122 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:19:37.313133 | orchestrator | 2026-03-24 05:19:37.313143 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 05:19:37.313154 | orchestrator | Tuesday 24 March 2026 05:19:19 +0000 (0:00:02.208) 0:30:00.234 
********* 2026-03-24 05:19:37.313164 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-03-24 05:19:37.313188 | orchestrator | 2026-03-24 05:19:37.313199 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-24 05:19:37.313209 | orchestrator | Tuesday 24 March 2026 05:19:20 +0000 (0:00:01.222) 0:30:01.457 ********* 2026-03-24 05:19:37.313220 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-03-24 05:19:37.313231 | orchestrator | 2026-03-24 05:19:37.313242 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-24 05:19:37.313252 | orchestrator | Tuesday 24 March 2026 05:19:21 +0000 (0:00:01.107) 0:30:02.564 ********* 2026-03-24 05:19:37.313263 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:37.313274 | orchestrator | 2026-03-24 05:19:37.313284 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-24 05:19:37.313295 | orchestrator | Tuesday 24 March 2026 05:19:23 +0000 (0:00:01.534) 0:30:04.099 ********* 2026-03-24 05:19:37.313306 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.313316 | orchestrator | 2026-03-24 05:19:37.313327 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-24 05:19:37.313338 | orchestrator | Tuesday 24 March 2026 05:19:24 +0000 (0:00:01.145) 0:30:05.245 ********* 2026-03-24 05:19:37.313348 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.313359 | orchestrator | 2026-03-24 05:19:37.313369 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-24 05:19:37.313380 | orchestrator | Tuesday 24 March 2026 05:19:25 +0000 (0:00:01.126) 0:30:06.371 ********* 2026-03-24 05:19:37.313391 | orchestrator | skipping: [testbed-node-2] 2026-03-24 
05:19:37.313401 | orchestrator | 2026-03-24 05:19:37.313412 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-24 05:19:37.313423 | orchestrator | Tuesday 24 March 2026 05:19:26 +0000 (0:00:01.138) 0:30:07.510 ********* 2026-03-24 05:19:37.313433 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:37.313444 | orchestrator | 2026-03-24 05:19:37.313455 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-24 05:19:37.313465 | orchestrator | Tuesday 24 March 2026 05:19:28 +0000 (0:00:01.545) 0:30:09.055 ********* 2026-03-24 05:19:37.313476 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.313486 | orchestrator | 2026-03-24 05:19:37.313505 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-24 05:19:37.313525 | orchestrator | Tuesday 24 March 2026 05:19:29 +0000 (0:00:01.105) 0:30:10.160 ********* 2026-03-24 05:19:37.313545 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.313565 | orchestrator | 2026-03-24 05:19:37.313585 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-24 05:19:37.313614 | orchestrator | Tuesday 24 March 2026 05:19:30 +0000 (0:00:01.136) 0:30:11.297 ********* 2026-03-24 05:19:37.313635 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:37.313655 | orchestrator | 2026-03-24 05:19:37.313674 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-24 05:19:37.313685 | orchestrator | Tuesday 24 March 2026 05:19:31 +0000 (0:00:01.523) 0:30:12.820 ********* 2026-03-24 05:19:37.313695 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:37.313706 | orchestrator | 2026-03-24 05:19:37.313717 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-24 05:19:37.313727 | orchestrator | Tuesday 24 March 2026 05:19:33 
+0000 (0:00:01.534) 0:30:14.355 ********* 2026-03-24 05:19:37.313738 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.313749 | orchestrator | 2026-03-24 05:19:37.313759 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-24 05:19:37.313770 | orchestrator | Tuesday 24 March 2026 05:19:34 +0000 (0:00:00.759) 0:30:15.115 ********* 2026-03-24 05:19:37.313780 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:19:37.313823 | orchestrator | 2026-03-24 05:19:37.313843 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-24 05:19:37.313861 | orchestrator | Tuesday 24 March 2026 05:19:35 +0000 (0:00:00.804) 0:30:15.919 ********* 2026-03-24 05:19:37.313892 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.313910 | orchestrator | 2026-03-24 05:19:37.313928 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-24 05:19:37.313946 | orchestrator | Tuesday 24 March 2026 05:19:35 +0000 (0:00:00.766) 0:30:16.685 ********* 2026-03-24 05:19:37.313966 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:19:37.313984 | orchestrator | 2026-03-24 05:19:37.314001 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-24 05:19:37.314085 | orchestrator | Tuesday 24 March 2026 05:19:36 +0000 (0:00:00.751) 0:30:17.437 ********* 2026-03-24 05:19:37.314127 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.157757 | orchestrator | 2026-03-24 05:20:17.157968 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-24 05:20:17.157994 | orchestrator | Tuesday 24 March 2026 05:19:37 +0000 (0:00:00.764) 0:30:18.201 ********* 2026-03-24 05:20:17.158007 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158084 | orchestrator | 2026-03-24 05:20:17.158097 | orchestrator | TASK [ceph-handler 
: Set_fact handler_rbd_status] ****************************** 2026-03-24 05:20:17.158109 | orchestrator | Tuesday 24 March 2026 05:19:38 +0000 (0:00:00.757) 0:30:18.959 ********* 2026-03-24 05:20:17.158120 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158131 | orchestrator | 2026-03-24 05:20:17.158177 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-24 05:20:17.158189 | orchestrator | Tuesday 24 March 2026 05:19:38 +0000 (0:00:00.767) 0:30:19.726 ********* 2026-03-24 05:20:17.158201 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:17.158221 | orchestrator | 2026-03-24 05:20:17.158239 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-24 05:20:17.158258 | orchestrator | Tuesday 24 March 2026 05:19:39 +0000 (0:00:00.802) 0:30:20.529 ********* 2026-03-24 05:20:17.158278 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:17.158298 | orchestrator | 2026-03-24 05:20:17.158318 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-24 05:20:17.158366 | orchestrator | Tuesday 24 March 2026 05:19:40 +0000 (0:00:00.842) 0:30:21.371 ********* 2026-03-24 05:20:17.158380 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:17.158393 | orchestrator | 2026-03-24 05:20:17.158406 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-24 05:20:17.158418 | orchestrator | Tuesday 24 March 2026 05:19:41 +0000 (0:00:00.784) 0:30:22.156 ********* 2026-03-24 05:20:17.158431 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158444 | orchestrator | 2026-03-24 05:20:17.158456 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-24 05:20:17.158469 | orchestrator | Tuesday 24 March 2026 05:19:42 +0000 (0:00:00.760) 0:30:22.917 ********* 2026-03-24 05:20:17.158481 | orchestrator | skipping: 
[testbed-node-2] 2026-03-24 05:20:17.158492 | orchestrator | 2026-03-24 05:20:17.158503 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-24 05:20:17.158513 | orchestrator | Tuesday 24 March 2026 05:19:42 +0000 (0:00:00.762) 0:30:23.679 ********* 2026-03-24 05:20:17.158524 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158535 | orchestrator | 2026-03-24 05:20:17.158545 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-24 05:20:17.158556 | orchestrator | Tuesday 24 March 2026 05:19:43 +0000 (0:00:00.782) 0:30:24.462 ********* 2026-03-24 05:20:17.158566 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158591 | orchestrator | 2026-03-24 05:20:17.158602 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-24 05:20:17.158613 | orchestrator | Tuesday 24 March 2026 05:19:44 +0000 (0:00:00.799) 0:30:25.261 ********* 2026-03-24 05:20:17.158624 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158634 | orchestrator | 2026-03-24 05:20:17.158645 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-24 05:20:17.158656 | orchestrator | Tuesday 24 March 2026 05:19:45 +0000 (0:00:00.751) 0:30:26.013 ********* 2026-03-24 05:20:17.158701 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158713 | orchestrator | 2026-03-24 05:20:17.158724 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-24 05:20:17.158735 | orchestrator | Tuesday 24 March 2026 05:19:45 +0000 (0:00:00.784) 0:30:26.798 ********* 2026-03-24 05:20:17.158745 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158756 | orchestrator | 2026-03-24 05:20:17.158767 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-24 
05:20:17.158779 | orchestrator | Tuesday 24 March 2026 05:19:46 +0000 (0:00:00.761) 0:30:27.560 ********* 2026-03-24 05:20:17.158789 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158800 | orchestrator | 2026-03-24 05:20:17.158810 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-24 05:20:17.158872 | orchestrator | Tuesday 24 March 2026 05:19:47 +0000 (0:00:00.795) 0:30:28.356 ********* 2026-03-24 05:20:17.158883 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158894 | orchestrator | 2026-03-24 05:20:17.158918 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-24 05:20:17.158929 | orchestrator | Tuesday 24 March 2026 05:19:48 +0000 (0:00:00.749) 0:30:29.105 ********* 2026-03-24 05:20:17.158940 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158951 | orchestrator | 2026-03-24 05:20:17.158961 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-24 05:20:17.158972 | orchestrator | Tuesday 24 March 2026 05:19:49 +0000 (0:00:00.815) 0:30:29.921 ********* 2026-03-24 05:20:17.158982 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.158993 | orchestrator | 2026-03-24 05:20:17.159003 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-24 05:20:17.159014 | orchestrator | Tuesday 24 March 2026 05:19:49 +0000 (0:00:00.766) 0:30:30.688 ********* 2026-03-24 05:20:17.159025 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159035 | orchestrator | 2026-03-24 05:20:17.159046 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-24 05:20:17.159057 | orchestrator | Tuesday 24 March 2026 05:19:50 +0000 (0:00:00.783) 0:30:31.471 ********* 2026-03-24 05:20:17.159067 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:17.159078 | orchestrator | 
2026-03-24 05:20:17.159089 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-24 05:20:17.159099 | orchestrator | Tuesday 24 March 2026 05:19:52 +0000 (0:00:01.601) 0:30:33.073 ********* 2026-03-24 05:20:17.159110 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:17.159121 | orchestrator | 2026-03-24 05:20:17.159131 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-24 05:20:17.159142 | orchestrator | Tuesday 24 March 2026 05:19:54 +0000 (0:00:02.019) 0:30:35.093 ********* 2026-03-24 05:20:17.159153 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-03-24 05:20:17.159165 | orchestrator | 2026-03-24 05:20:17.159197 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-24 05:20:17.159209 | orchestrator | Tuesday 24 March 2026 05:19:55 +0000 (0:00:01.223) 0:30:36.317 ********* 2026-03-24 05:20:17.159220 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159230 | orchestrator | 2026-03-24 05:20:17.159241 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-24 05:20:17.159254 | orchestrator | Tuesday 24 March 2026 05:19:56 +0000 (0:00:01.110) 0:30:37.428 ********* 2026-03-24 05:20:17.159273 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159292 | orchestrator | 2026-03-24 05:20:17.159311 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-24 05:20:17.159331 | orchestrator | Tuesday 24 March 2026 05:19:57 +0000 (0:00:01.094) 0:30:38.522 ********* 2026-03-24 05:20:17.159350 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-24 05:20:17.159369 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-24 05:20:17.159400 | 
orchestrator | 2026-03-24 05:20:17.159416 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-24 05:20:17.159434 | orchestrator | Tuesday 24 March 2026 05:19:59 +0000 (0:00:01.817) 0:30:40.340 ********* 2026-03-24 05:20:17.159454 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:17.159467 | orchestrator | 2026-03-24 05:20:17.159477 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-24 05:20:17.159488 | orchestrator | Tuesday 24 March 2026 05:20:00 +0000 (0:00:01.436) 0:30:41.777 ********* 2026-03-24 05:20:17.159498 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159509 | orchestrator | 2026-03-24 05:20:17.159519 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-24 05:20:17.159530 | orchestrator | Tuesday 24 March 2026 05:20:02 +0000 (0:00:01.151) 0:30:42.929 ********* 2026-03-24 05:20:17.159540 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159551 | orchestrator | 2026-03-24 05:20:17.159562 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-24 05:20:17.159572 | orchestrator | Tuesday 24 March 2026 05:20:02 +0000 (0:00:00.765) 0:30:43.694 ********* 2026-03-24 05:20:17.159583 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159593 | orchestrator | 2026-03-24 05:20:17.159604 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-24 05:20:17.159614 | orchestrator | Tuesday 24 March 2026 05:20:03 +0000 (0:00:00.760) 0:30:44.455 ********* 2026-03-24 05:20:17.159625 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-03-24 05:20:17.159635 | orchestrator | 2026-03-24 05:20:17.159646 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-24 
05:20:17.159657 | orchestrator | Tuesday 24 March 2026 05:20:04 +0000 (0:00:01.088) 0:30:45.544 ********* 2026-03-24 05:20:17.159667 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:17.159678 | orchestrator | 2026-03-24 05:20:17.159688 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-24 05:20:17.159699 | orchestrator | Tuesday 24 March 2026 05:20:06 +0000 (0:00:01.735) 0:30:47.279 ********* 2026-03-24 05:20:17.159710 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 05:20:17.159720 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 05:20:17.159731 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 05:20:17.159741 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159752 | orchestrator | 2026-03-24 05:20:17.159762 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-24 05:20:17.159773 | orchestrator | Tuesday 24 March 2026 05:20:07 +0000 (0:00:01.151) 0:30:48.431 ********* 2026-03-24 05:20:17.159783 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159794 | orchestrator | 2026-03-24 05:20:17.159805 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-24 05:20:17.159841 | orchestrator | Tuesday 24 March 2026 05:20:08 +0000 (0:00:01.162) 0:30:49.594 ********* 2026-03-24 05:20:17.159861 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159872 | orchestrator | 2026-03-24 05:20:17.159890 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-24 05:20:17.159901 | orchestrator | Tuesday 24 March 2026 05:20:09 +0000 (0:00:01.151) 0:30:50.746 ********* 2026-03-24 05:20:17.159912 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159923 | orchestrator 
| 2026-03-24 05:20:17.159933 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-24 05:20:17.159944 | orchestrator | Tuesday 24 March 2026 05:20:10 +0000 (0:00:01.126) 0:30:51.872 ********* 2026-03-24 05:20:17.159954 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.159965 | orchestrator | 2026-03-24 05:20:17.159976 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-24 05:20:17.159994 | orchestrator | Tuesday 24 March 2026 05:20:12 +0000 (0:00:01.154) 0:30:53.026 ********* 2026-03-24 05:20:17.160005 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:17.160015 | orchestrator | 2026-03-24 05:20:17.160026 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-24 05:20:17.160037 | orchestrator | Tuesday 24 March 2026 05:20:12 +0000 (0:00:00.780) 0:30:53.806 ********* 2026-03-24 05:20:17.160047 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:17.160058 | orchestrator | 2026-03-24 05:20:17.160069 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-24 05:20:17.160079 | orchestrator | Tuesday 24 March 2026 05:20:15 +0000 (0:00:02.309) 0:30:56.116 ********* 2026-03-24 05:20:17.160090 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:17.160101 | orchestrator | 2026-03-24 05:20:17.160111 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-24 05:20:17.160122 | orchestrator | Tuesday 24 March 2026 05:20:16 +0000 (0:00:00.829) 0:30:56.946 ********* 2026-03-24 05:20:17.160132 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-03-24 05:20:17.160143 | orchestrator | 2026-03-24 05:20:17.160162 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-24 05:20:53.176458 | 
orchestrator | Tuesday 24 March 2026 05:20:17 +0000 (0:00:01.099) 0:30:58.045 ********* 2026-03-24 05:20:53.176568 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.176585 | orchestrator | 2026-03-24 05:20:53.176597 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-24 05:20:53.176608 | orchestrator | Tuesday 24 March 2026 05:20:18 +0000 (0:00:01.122) 0:30:59.168 ********* 2026-03-24 05:20:53.176618 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.176628 | orchestrator | 2026-03-24 05:20:53.176638 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-24 05:20:53.176648 | orchestrator | Tuesday 24 March 2026 05:20:19 +0000 (0:00:01.105) 0:31:00.274 ********* 2026-03-24 05:20:53.176658 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.176668 | orchestrator | 2026-03-24 05:20:53.176678 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-24 05:20:53.176688 | orchestrator | Tuesday 24 March 2026 05:20:20 +0000 (0:00:01.164) 0:31:01.439 ********* 2026-03-24 05:20:53.176697 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.176707 | orchestrator | 2026-03-24 05:20:53.176717 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-24 05:20:53.176726 | orchestrator | Tuesday 24 March 2026 05:20:21 +0000 (0:00:01.117) 0:31:02.556 ********* 2026-03-24 05:20:53.176736 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.176746 | orchestrator | 2026-03-24 05:20:53.176755 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-24 05:20:53.176765 | orchestrator | Tuesday 24 March 2026 05:20:22 +0000 (0:00:01.115) 0:31:03.672 ********* 2026-03-24 05:20:53.176775 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.176785 | orchestrator | 2026-03-24 
05:20:53.176795 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-24 05:20:53.176805 | orchestrator | Tuesday 24 March 2026 05:20:23 +0000 (0:00:01.161) 0:31:04.833 ********* 2026-03-24 05:20:53.176814 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.176824 | orchestrator | 2026-03-24 05:20:53.176834 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-24 05:20:53.176914 | orchestrator | Tuesday 24 March 2026 05:20:25 +0000 (0:00:01.125) 0:31:05.958 ********* 2026-03-24 05:20:53.176927 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.176937 | orchestrator | 2026-03-24 05:20:53.176946 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-24 05:20:53.176956 | orchestrator | Tuesday 24 March 2026 05:20:26 +0000 (0:00:01.136) 0:31:07.095 ********* 2026-03-24 05:20:53.176966 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:20:53.176976 | orchestrator | 2026-03-24 05:20:53.176986 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-24 05:20:53.177021 | orchestrator | Tuesday 24 March 2026 05:20:27 +0000 (0:00:00.814) 0:31:07.910 ********* 2026-03-24 05:20:53.177034 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-03-24 05:20:53.177046 | orchestrator | 2026-03-24 05:20:53.177057 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-24 05:20:53.177068 | orchestrator | Tuesday 24 March 2026 05:20:28 +0000 (0:00:01.105) 0:31:09.016 ********* 2026-03-24 05:20:53.177079 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-03-24 05:20:53.177091 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-24 05:20:53.177101 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-24 
05:20:53.177112 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-24 05:20:53.177123 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-24 05:20:53.177133 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-24 05:20:53.177144 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-24 05:20:53.177155 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-24 05:20:53.177166 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 05:20:53.177177 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 05:20:53.177202 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 05:20:53.177214 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 05:20:53.177225 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 05:20:53.177236 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 05:20:53.177247 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-03-24 05:20:53.177258 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-03-24 05:20:53.177268 | orchestrator | 2026-03-24 05:20:53.177280 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-24 05:20:53.177291 | orchestrator | Tuesday 24 March 2026 05:20:34 +0000 (0:00:06.562) 0:31:15.579 ********* 2026-03-24 05:20:53.177302 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177313 | orchestrator | 2026-03-24 05:20:53.177323 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-24 05:20:53.177335 | orchestrator | Tuesday 24 March 2026 05:20:35 +0000 (0:00:00.809) 0:31:16.388 ********* 2026-03-24 05:20:53.177346 | orchestrator | skipping: [testbed-node-2] 2026-03-24 
05:20:53.177357 | orchestrator | 2026-03-24 05:20:53.177368 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-24 05:20:53.177379 | orchestrator | Tuesday 24 March 2026 05:20:36 +0000 (0:00:00.746) 0:31:17.135 ********* 2026-03-24 05:20:53.177390 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177401 | orchestrator | 2026-03-24 05:20:53.177412 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-24 05:20:53.177422 | orchestrator | Tuesday 24 March 2026 05:20:36 +0000 (0:00:00.759) 0:31:17.894 ********* 2026-03-24 05:20:53.177432 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177441 | orchestrator | 2026-03-24 05:20:53.177451 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-24 05:20:53.177477 | orchestrator | Tuesday 24 March 2026 05:20:37 +0000 (0:00:00.757) 0:31:18.652 ********* 2026-03-24 05:20:53.177487 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177497 | orchestrator | 2026-03-24 05:20:53.177507 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-24 05:20:53.177522 | orchestrator | Tuesday 24 March 2026 05:20:38 +0000 (0:00:00.767) 0:31:19.420 ********* 2026-03-24 05:20:53.177539 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177557 | orchestrator | 2026-03-24 05:20:53.177580 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-24 05:20:53.177612 | orchestrator | Tuesday 24 March 2026 05:20:39 +0000 (0:00:00.759) 0:31:20.179 ********* 2026-03-24 05:20:53.177628 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177643 | orchestrator | 2026-03-24 05:20:53.177660 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 
2026-03-24 05:20:53.177678 | orchestrator | Tuesday 24 March 2026 05:20:40 +0000 (0:00:00.798) 0:31:20.978 ********* 2026-03-24 05:20:53.177693 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177709 | orchestrator | 2026-03-24 05:20:53.177727 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-24 05:20:53.177743 | orchestrator | Tuesday 24 March 2026 05:20:40 +0000 (0:00:00.792) 0:31:21.770 ********* 2026-03-24 05:20:53.177761 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177779 | orchestrator | 2026-03-24 05:20:53.177796 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 05:20:53.177813 | orchestrator | Tuesday 24 March 2026 05:20:41 +0000 (0:00:00.763) 0:31:22.534 ********* 2026-03-24 05:20:53.177823 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177832 | orchestrator | 2026-03-24 05:20:53.177868 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 05:20:53.177880 | orchestrator | Tuesday 24 March 2026 05:20:42 +0000 (0:00:00.742) 0:31:23.276 ********* 2026-03-24 05:20:53.177890 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177900 | orchestrator | 2026-03-24 05:20:53.177909 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 05:20:53.177919 | orchestrator | Tuesday 24 March 2026 05:20:43 +0000 (0:00:00.774) 0:31:24.051 ********* 2026-03-24 05:20:53.177929 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.177938 | orchestrator | 2026-03-24 05:20:53.177948 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 05:20:53.177958 | orchestrator | Tuesday 24 March 2026 05:20:43 +0000 (0:00:00.766) 0:31:24.818 ********* 2026-03-24 05:20:53.177967 | orchestrator | 
skipping: [testbed-node-2] 2026-03-24 05:20:53.177977 | orchestrator | 2026-03-24 05:20:53.177986 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 05:20:53.177996 | orchestrator | Tuesday 24 March 2026 05:20:44 +0000 (0:00:00.880) 0:31:25.699 ********* 2026-03-24 05:20:53.178005 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.178015 | orchestrator | 2026-03-24 05:20:53.178106 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-24 05:20:53.178116 | orchestrator | Tuesday 24 March 2026 05:20:45 +0000 (0:00:00.812) 0:31:26.511 ********* 2026-03-24 05:20:53.178126 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.178136 | orchestrator | 2026-03-24 05:20:53.178146 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:20:53.178155 | orchestrator | Tuesday 24 March 2026 05:20:46 +0000 (0:00:00.856) 0:31:27.367 ********* 2026-03-24 05:20:53.178165 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.178175 | orchestrator | 2026-03-24 05:20:53.178184 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:20:53.178194 | orchestrator | Tuesday 24 March 2026 05:20:47 +0000 (0:00:00.748) 0:31:28.116 ********* 2026-03-24 05:20:53.178204 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.178213 | orchestrator | 2026-03-24 05:20:53.178223 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:20:53.178242 | orchestrator | Tuesday 24 March 2026 05:20:47 +0000 (0:00:00.755) 0:31:28.872 ********* 2026-03-24 05:20:53.178252 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.178262 | orchestrator | 2026-03-24 05:20:53.178271 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:20:53.178281 | orchestrator | Tuesday 24 March 2026 05:20:48 +0000 (0:00:00.756) 0:31:29.629 ********* 2026-03-24 05:20:53.178291 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.178310 | orchestrator | 2026-03-24 05:20:53.178320 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:20:53.178329 | orchestrator | Tuesday 24 March 2026 05:20:49 +0000 (0:00:00.763) 0:31:30.392 ********* 2026-03-24 05:20:53.178339 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.178349 | orchestrator | 2026-03-24 05:20:53.178359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:20:53.178368 | orchestrator | Tuesday 24 March 2026 05:20:50 +0000 (0:00:00.787) 0:31:31.180 ********* 2026-03-24 05:20:53.178378 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.178388 | orchestrator | 2026-03-24 05:20:53.178397 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:20:53.178407 | orchestrator | Tuesday 24 March 2026 05:20:51 +0000 (0:00:00.754) 0:31:31.934 ********* 2026-03-24 05:20:53.178417 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-24 05:20:53.178427 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-24 05:20:53.178437 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-24 05:20:53.178446 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:20:53.178456 | orchestrator | 2026-03-24 05:20:53.178466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:20:53.178475 | orchestrator | Tuesday 24 March 2026 05:20:52 +0000 (0:00:01.071) 0:31:33.006 ********* 2026-03-24 05:20:53.178485 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-24 
05:20:53.178506 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-24 05:21:49.382423 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-24 05:21:49.382534 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:21:49.382550 | orchestrator | 2026-03-24 05:21:49.382563 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:21:49.382576 | orchestrator | Tuesday 24 March 2026 05:20:53 +0000 (0:00:01.059) 0:31:34.065 ********* 2026-03-24 05:21:49.382587 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-24 05:21:49.382598 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-24 05:21:49.382609 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-24 05:21:49.382620 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:21:49.382630 | orchestrator | 2026-03-24 05:21:49.382655 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:21:49.382667 | orchestrator | Tuesday 24 March 2026 05:20:54 +0000 (0:00:01.017) 0:31:35.082 ********* 2026-03-24 05:21:49.382677 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:21:49.382693 | orchestrator | 2026-03-24 05:21:49.382714 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:21:49.382733 | orchestrator | Tuesday 24 March 2026 05:20:54 +0000 (0:00:00.811) 0:31:35.894 ********* 2026-03-24 05:21:49.382753 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-24 05:21:49.382772 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:21:49.382793 | orchestrator | 2026-03-24 05:21:49.382814 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:21:49.382834 | orchestrator | Tuesday 24 March 2026 05:20:55 +0000 (0:00:00.876) 0:31:36.770 ********* 2026-03-24 
05:21:49.382855 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:21:49.382867 | orchestrator | 2026-03-24 05:21:49.382878 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-24 05:21:49.382916 | orchestrator | Tuesday 24 March 2026 05:20:57 +0000 (0:00:01.477) 0:31:38.248 ********* 2026-03-24 05:21:49.382928 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:21:49.382939 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:21:49.382951 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-24 05:21:49.382961 | orchestrator | 2026-03-24 05:21:49.383002 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-24 05:21:49.383015 | orchestrator | Tuesday 24 March 2026 05:20:58 +0000 (0:00:01.603) 0:31:39.852 ********* 2026-03-24 05:21:49.383028 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-03-24 05:21:49.383040 | orchestrator | 2026-03-24 05:21:49.383053 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-24 05:21:49.383065 | orchestrator | Tuesday 24 March 2026 05:21:00 +0000 (0:00:01.091) 0:31:40.943 ********* 2026-03-24 05:21:49.383077 | orchestrator | ok: [testbed-node-2] 2026-03-24 05:21:49.383090 | orchestrator | 2026-03-24 05:21:49.383103 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-24 05:21:49.383116 | orchestrator | Tuesday 24 March 2026 05:21:01 +0000 (0:00:01.517) 0:31:42.461 ********* 2026-03-24 05:21:49.383128 | orchestrator | skipping: [testbed-node-2] 2026-03-24 05:21:49.383141 | orchestrator | 2026-03-24 05:21:49.383153 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-24 05:21:49.383165 | orchestrator | 
Tuesday 24 March 2026 05:21:02 +0000 (0:00:01.115) 0:31:43.576 *********
2026-03-24 05:21:49.383178 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 05:21:49.383190 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 05:21:49.383202 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-24 05:21:49.383214 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-03-24 05:21:49.383226 | orchestrator |
2026-03-24 05:21:49.383238 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-24 05:21:49.383265 | orchestrator | Tuesday 24 March 2026 05:21:10 +0000 (0:00:07.342) 0:31:50.918 *********
2026-03-24 05:21:49.383278 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:21:49.383290 | orchestrator |
2026-03-24 05:21:49.383302 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-24 05:21:49.383314 | orchestrator | Tuesday 24 March 2026 05:21:11 +0000 (0:00:01.199) 0:31:52.118 *********
2026-03-24 05:21:49.383327 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-24 05:21:49.383339 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-24 05:21:49.383350 | orchestrator |
2026-03-24 05:21:49.383360 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-24 05:21:49.383371 | orchestrator | Tuesday 24 March 2026 05:21:14 +0000 (0:00:03.227) 0:31:55.345 *********
2026-03-24 05:21:49.383382 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-24 05:21:49.383392 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-24 05:21:49.383403 | orchestrator |
2026-03-24 05:21:49.383414 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-24 05:21:49.383424 | orchestrator | Tuesday 24 March 2026 05:21:16 +0000 (0:00:02.000) 0:31:57.345 *********
2026-03-24 05:21:49.383435 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:21:49.383446 | orchestrator |
2026-03-24 05:21:49.383457 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-24 05:21:49.383467 | orchestrator | Tuesday 24 March 2026 05:21:17 +0000 (0:00:01.545) 0:31:58.891 *********
2026-03-24 05:21:49.383479 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:21:49.383490 | orchestrator |
2026-03-24 05:21:49.383501 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-24 05:21:49.383512 | orchestrator | Tuesday 24 March 2026 05:21:18 +0000 (0:00:00.772) 0:31:59.663 *********
2026-03-24 05:21:49.383522 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:21:49.383533 | orchestrator |
2026-03-24 05:21:49.383544 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-24 05:21:49.383572 | orchestrator | Tuesday 24 March 2026 05:21:19 +0000 (0:00:00.742) 0:32:00.406 *********
2026-03-24 05:21:49.383583 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-03-24 05:21:49.383594 | orchestrator |
2026-03-24 05:21:49.383613 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-24 05:21:49.383633 | orchestrator | Tuesday 24 March 2026 05:21:20 +0000 (0:00:01.191) 0:32:01.597 *********
2026-03-24 05:21:49.383652 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:21:49.383671 | orchestrator |
2026-03-24 05:21:49.383690 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-24 05:21:49.383707 | orchestrator | Tuesday 24 March 2026 05:21:21 +0000 (0:00:01.127) 0:32:02.724 *********
2026-03-24 05:21:49.383725 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:21:49.383745 | orchestrator |
2026-03-24 05:21:49.383766 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-24 05:21:49.383778 | orchestrator | Tuesday 24 March 2026 05:21:22 +0000 (0:00:01.134) 0:32:03.859 *********
2026-03-24 05:21:49.383788 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-03-24 05:21:49.383799 | orchestrator |
2026-03-24 05:21:49.383810 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-24 05:21:49.383824 | orchestrator | Tuesday 24 March 2026 05:21:24 +0000 (0:00:01.241) 0:32:05.101 *********
2026-03-24 05:21:49.383842 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:21:49.383860 | orchestrator |
2026-03-24 05:21:49.383878 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-24 05:21:49.383920 | orchestrator | Tuesday 24 March 2026 05:21:26 +0000 (0:00:01.964) 0:32:07.065 *********
2026-03-24 05:21:49.383940 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:21:49.383959 | orchestrator |
2026-03-24 05:21:49.383977 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-24 05:21:49.383996 | orchestrator | Tuesday 24 March 2026 05:21:28 +0000 (0:00:01.994) 0:32:09.060 *********
2026-03-24 05:21:49.384016 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:21:49.384033 | orchestrator |
2026-03-24 05:21:49.384052 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-24 05:21:49.384063 | orchestrator | Tuesday 24 March 2026 05:21:30 +0000 (0:00:02.433) 0:32:11.493 *********
2026-03-24 05:21:49.384073 | orchestrator | changed: [testbed-node-2]
2026-03-24 05:21:49.384084 | orchestrator |
2026-03-24 05:21:49.384095 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-24 05:21:49.384106 | orchestrator | Tuesday 24 March 2026 05:21:33 +0000 (0:00:03.391) 0:32:14.885 *********
2026-03-24 05:21:49.384116 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-24 05:21:49.384127 | orchestrator |
2026-03-24 05:21:49.384137 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-24 05:21:49.384148 | orchestrator | Tuesday 24 March 2026 05:21:35 +0000 (0:00:01.560) 0:32:16.445 *********
2026-03-24 05:21:49.384158 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-24 05:21:49.384169 | orchestrator |
2026-03-24 05:21:49.384180 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-24 05:21:49.384190 | orchestrator | Tuesday 24 March 2026 05:21:37 +0000 (0:00:02.414) 0:32:18.859 *********
2026-03-24 05:21:49.384201 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-24 05:21:49.384211 | orchestrator |
2026-03-24 05:21:49.384222 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-24 05:21:49.384233 | orchestrator | Tuesday 24 March 2026 05:21:40 +0000 (0:00:02.314) 0:32:21.174 *********
2026-03-24 05:21:49.384243 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:21:49.384254 | orchestrator |
2026-03-24 05:21:49.384264 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-24 05:21:49.384275 | orchestrator | Tuesday 24 March 2026 05:21:41 +0000 (0:00:01.321) 0:32:22.495 *********
2026-03-24 05:21:49.384285 | orchestrator | ok: [testbed-node-2]
2026-03-24 05:21:49.384311 | orchestrator |
2026-03-24 05:21:49.384323 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-24 05:21:49.384352 | orchestrator | Tuesday 24 March 2026 05:21:42 +0000 (0:00:01.132) 0:32:23.628 *********
2026-03-24 05:21:49.384372 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-24 05:21:49.384383 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-24 05:21:49.384394 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:21:49.384405 | orchestrator |
2026-03-24 05:21:49.384415 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-24 05:21:49.384426 | orchestrator | Tuesday 24 March 2026 05:21:44 +0000 (0:00:01.610) 0:32:25.239 *********
2026-03-24 05:21:49.384437 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-24 05:21:49.384447 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-24 05:21:49.384458 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-24 05:21:49.384469 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-24 05:21:49.384479 | orchestrator | skipping: [testbed-node-2]
2026-03-24 05:21:49.384490 | orchestrator |
2026-03-24 05:21:49.384501 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-03-24 05:21:49.384511 | orchestrator |
2026-03-24 05:21:49.384522 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-24 05:21:49.384532 | orchestrator | Tuesday 24 March 2026 05:21:46 +0000 (0:00:01.925) 0:32:27.165 *********
2026-03-24 05:21:49.384543 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:21:49.384554 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:21:49.384564 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:21:49.384575 | orchestrator |
2026-03-24 05:21:49.384585 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-24 05:21:49.384596 | orchestrator | Tuesday 24 March 2026 05:21:47 +0000 (0:00:01.610) 0:32:28.775 *********
2026-03-24 05:21:49.384607 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:21:49.384617 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:21:49.384628 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:21:49.384639 | orchestrator |
2026-03-24 05:21:49.384659 | orchestrator | TASK [Get pool list] ***********************************************************
2026-03-24 05:21:56.012329 | orchestrator | Tuesday 24 March 2026 05:21:49 +0000 (0:00:01.491) 0:32:30.266 *********
2026-03-24 05:21:56.012403 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-24 05:21:56.012411 | orchestrator |
2026-03-24 05:21:56.012416 | orchestrator | TASK [Get balancer module status] **********************************************
2026-03-24 05:21:56.012421 | orchestrator | Tuesday 24 March 2026 05:21:52 +0000 (0:00:03.061) 0:32:33.328 *********
2026-03-24 05:21:56.012426 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-24 05:21:56.012430 | orchestrator |
2026-03-24 05:21:56.012435 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] ****************************************
2026-03-24 05:21:56.012439 | orchestrator | Tuesday 24 March 2026 05:21:55 +0000 (0:00:03.027) 0:32:36.355 *********
2026-03-24 05:21:56.012448 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-03-24T02:50:33.008790+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:56.012493 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-03-24T02:51:38.314039+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 
'last_change': '33', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '29', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:56.012499 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-03-24T02:51:42.261754+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': 
"0'0", 'target_version': "0'0"}, 'last_change': '33', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:56.012514 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-03-24T02:52:38.691678+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': 
{'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '47', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '42', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:56.423369 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-03-24T02:52:44.112722+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 
'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '47', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '44', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:56.423495 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-03-24T02:52:50.317308+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 
'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '71', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '59', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:56.423524 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-03-24T02:52:56.580544+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '179', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '61', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:56.423542 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-03-24T02:53:02.864548+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 
'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '71', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '61', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:56.423557 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-03-24T02:53:14.562601+0000', 'flags': 1, 
'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '71', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '63', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:58.157082 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 
'create_time': '2026-03-24T02:53:57.095211+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '70', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 70, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:58.157161 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 
'pool_name': 'volumes', 'create_time': '2026-03-24T02:54:06.281136+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '78', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 78, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:58.157205 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 
=> (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-03-24T02:54:15.402092+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '191', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 191, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 
05:21:58.157213 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-03-24T02:54:24.635521+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '93', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 93, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 
'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:21:58.157232 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-03-24T02:54:33.932617+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '101', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 101, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 
'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-24 05:23:44.576444 | orchestrator | 2026-03-24 05:23:44.576587 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-03-24 05:23:44.576618 | orchestrator | Tuesday 24 March 2026 05:21:58 +0000 (0:00:02.696) 0:32:39.051 ********* 2026-03-24 05:23:44.576638 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:23:44.576657 | orchestrator | 2026-03-24 05:23:44.576675 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-03-24 05:23:44.576694 | orchestrator | Tuesday 24 March 2026 05:22:01 +0000 (0:00:03.232) 0:32:42.283 ********* 2026-03-24 05:23:44.576715 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-03-24 05:23:44.576738 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-03-24 05:23:44.576759 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-03-24 05:23:44.576779 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-03-24 05:23:44.576833 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-03-24 05:23:44.576855 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-03-24 05:23:44.576874 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-03-24 05:23:44.576894 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 
'default.rgw.meta', 'mode': 'on'}) 2026-03-24 05:23:44.576914 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-03-24 05:23:44.576935 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-03-24 05:23:44.576955 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-03-24 05:23:44.577004 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-03-24 05:23:44.577025 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-24 05:23:44.577044 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-24 05:23:44.577061 | orchestrator | 2026-03-24 05:23:44.577078 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-03-24 05:23:44.577094 | orchestrator | Tuesday 24 March 2026 05:23:15 +0000 (0:01:14.499) 0:33:56.783 ********* 2026-03-24 05:23:44.577114 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-24 05:23:44.577135 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-24 05:23:44.577154 | orchestrator | 2026-03-24 05:23:44.577172 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-24 05:23:44.577188 | orchestrator | 2026-03-24 05:23:44.577204 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:23:44.577221 | orchestrator | Tuesday 24 March 2026 05:23:22 +0000 (0:00:06.157) 0:34:02.941 ********* 2026-03-24 05:23:44.577256 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-24 05:23:44.577273 | orchestrator | 2026-03-24 05:23:44.577289 | orchestrator | TASK [ceph-facts : Check if it is atomic host] 
********************************* 2026-03-24 05:23:44.577304 | orchestrator | Tuesday 24 March 2026 05:23:23 +0000 (0:00:01.132) 0:34:04.073 ********* 2026-03-24 05:23:44.577321 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:23:44.577339 | orchestrator | 2026-03-24 05:23:44.577356 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:23:44.577375 | orchestrator | Tuesday 24 March 2026 05:23:24 +0000 (0:00:01.525) 0:34:05.598 ********* 2026-03-24 05:23:44.577391 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:23:44.577408 | orchestrator | 2026-03-24 05:23:44.577427 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:23:44.577445 | orchestrator | Tuesday 24 March 2026 05:23:25 +0000 (0:00:01.143) 0:34:06.742 ********* 2026-03-24 05:23:44.577461 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:23:44.577476 | orchestrator | 2026-03-24 05:23:44.577493 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:23:44.577511 | orchestrator | Tuesday 24 March 2026 05:23:27 +0000 (0:00:01.464) 0:34:08.207 ********* 2026-03-24 05:23:44.577528 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:23:44.577544 | orchestrator | 2026-03-24 05:23:44.577562 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:23:44.577579 | orchestrator | Tuesday 24 March 2026 05:23:28 +0000 (0:00:01.114) 0:34:09.322 ********* 2026-03-24 05:23:44.577595 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:23:44.577611 | orchestrator | 2026-03-24 05:23:44.577627 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:23:44.577662 | orchestrator | Tuesday 24 March 2026 05:23:29 +0000 (0:00:01.111) 0:34:10.433 ********* 2026-03-24 05:23:44.577678 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:23:44.577693 | 
orchestrator | 2026-03-24 05:23:44.577709 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:23:44.577726 | orchestrator | Tuesday 24 March 2026 05:23:30 +0000 (0:00:01.122) 0:34:11.555 ********* 2026-03-24 05:23:44.577742 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:23:44.577757 | orchestrator | 2026-03-24 05:23:44.577774 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:23:44.577819 | orchestrator | Tuesday 24 March 2026 05:23:31 +0000 (0:00:01.136) 0:34:12.692 ********* 2026-03-24 05:23:44.577837 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:23:44.577853 | orchestrator | 2026-03-24 05:23:44.577870 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:23:44.577886 | orchestrator | Tuesday 24 March 2026 05:23:32 +0000 (0:00:01.111) 0:34:13.804 ********* 2026-03-24 05:23:44.577903 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:23:44.577921 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:23:44.577938 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:23:44.578189 | orchestrator | 2026-03-24 05:23:44.578223 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:23:44.578243 | orchestrator | Tuesday 24 March 2026 05:23:34 +0000 (0:00:01.647) 0:34:15.452 ********* 2026-03-24 05:23:44.578261 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:23:44.578278 | orchestrator | 2026-03-24 05:23:44.578297 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:23:44.578316 | orchestrator | Tuesday 24 March 2026 05:23:35 +0000 (0:00:01.216) 0:34:16.668 ********* 2026-03-24 
05:23:44.578335 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:23:44.578351 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:23:44.578367 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:23:44.578382 | orchestrator | 2026-03-24 05:23:44.578398 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:23:44.578415 | orchestrator | Tuesday 24 March 2026 05:23:38 +0000 (0:00:03.119) 0:34:19.788 ********* 2026-03-24 05:23:44.578432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-24 05:23:44.578448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-24 05:23:44.578465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-24 05:23:44.578482 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:23:44.578500 | orchestrator | 2026-03-24 05:23:44.578520 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:23:44.578539 | orchestrator | Tuesday 24 March 2026 05:23:40 +0000 (0:00:01.421) 0:34:21.210 ********* 2026-03-24 05:23:44.578560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:23:44.578582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:23:44.578601 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:23:44.578617 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:23:44.578633 | orchestrator | 2026-03-24 05:23:44.578683 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:23:44.578702 | orchestrator | Tuesday 24 March 2026 05:23:42 +0000 (0:00:01.875) 0:34:23.085 ********* 2026-03-24 05:23:44.578723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:23:44.578745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:23:44.578764 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:23:44.578783 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:23:44.578799 | orchestrator | 2026-03-24 05:23:44.578817 | 
orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:23:44.578835 | orchestrator | Tuesday 24 March 2026 05:23:43 +0000 (0:00:01.137) 0:34:24.223 ********* 2026-03-24 05:23:44.578879 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:23:36.312839', 'end': '2026-03-24 05:23:36.356992', 'delta': '0:00:00.044153', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:24:02.539681 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:23:36.884913', 'end': '2026-03-24 05:23:36.931665', 'delta': '0:00:00.046752', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:24:02.539799 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:23:37.692469', 'end': '2026-03-24 05:23:37.742123', 'delta': '0:00:00.049654', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:24:02.539841 | orchestrator | 2026-03-24 05:24:02.539856 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:24:02.539869 | orchestrator | Tuesday 24 March 2026 05:23:44 +0000 (0:00:01.243) 0:34:25.466 ********* 2026-03-24 05:24:02.539881 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:24:02.539908 | orchestrator | 2026-03-24 05:24:02.539935 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:24:02.539946 | orchestrator | Tuesday 24 March 2026 05:23:46 +0000 (0:00:01.592) 0:34:27.059 ********* 2026-03-24 05:24:02.539958 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:24:02.540003 | orchestrator | 2026-03-24 05:24:02.540016 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:24:02.540027 | orchestrator | Tuesday 24 March 2026 05:23:47 +0000 (0:00:01.538) 0:34:28.597 ********* 2026-03-24 05:24:02.540038 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:24:02.540049 | orchestrator | 2026-03-24 05:24:02.540060 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:24:02.540070 | orchestrator | Tuesday 24 March 2026 05:23:48 +0000 (0:00:01.112) 0:34:29.710 ********* 2026-03-24 05:24:02.540081 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:24:02.540092 | orchestrator | 2026-03-24 05:24:02.540103 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:24:02.540114 | orchestrator | Tuesday 24 March 2026 05:23:50 +0000 (0:00:02.003) 0:34:31.714 ********* 2026-03-24 05:24:02.540124 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:24:02.540139 | orchestrator | 2026-03-24 05:24:02.540158 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:24:02.540177 | orchestrator | Tuesday 24 March 2026 05:23:51 +0000 (0:00:01.138) 0:34:32.853 ********* 2026-03-24 05:24:02.540195 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:24:02.540214 | orchestrator | 2026-03-24 05:24:02.540232 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:24:02.540251 | orchestrator | Tuesday 24 March 2026 05:23:53 +0000 (0:00:01.114) 0:34:33.967 ********* 2026-03-24 05:24:02.540269 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:24:02.540288 | orchestrator | 2026-03-24 05:24:02.540307 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:24:02.540328 | orchestrator | Tuesday 24 March 2026 05:23:54 +0000 (0:00:01.219) 0:34:35.187 ********* 2026-03-24 05:24:02.540347 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:24:02.540367 | orchestrator | 2026-03-24 05:24:02.540386 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:24:02.540405 | orchestrator | Tuesday 24 March 2026 05:23:55 +0000 (0:00:01.120) 0:34:36.308 ********* 2026-03-24 05:24:02.540418 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:24:02.540431 | orchestrator | 2026-03-24 05:24:02.540444 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 
2026-03-24 05:24:02.540457 | orchestrator | Tuesday 24 March 2026 05:23:56 +0000 (0:00:01.122) 0:34:37.430 ********* 2026-03-24 05:24:02.540470 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:24:02.540482 | orchestrator | 2026-03-24 05:24:02.540495 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:24:02.540508 | orchestrator | Tuesday 24 March 2026 05:23:57 +0000 (0:00:01.167) 0:34:38.598 ********* 2026-03-24 05:24:02.540520 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:24:02.540532 | orchestrator | 2026-03-24 05:24:02.540546 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:24:02.540559 | orchestrator | Tuesday 24 March 2026 05:23:58 +0000 (0:00:01.157) 0:34:39.755 ********* 2026-03-24 05:24:02.540570 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:24:02.540581 | orchestrator | 2026-03-24 05:24:02.540592 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:24:02.540616 | orchestrator | Tuesday 24 March 2026 05:24:00 +0000 (0:00:01.154) 0:34:40.910 ********* 2026-03-24 05:24:02.540646 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:24:02.540658 | orchestrator | 2026-03-24 05:24:02.540669 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:24:02.540681 | orchestrator | Tuesday 24 March 2026 05:24:01 +0000 (0:00:01.128) 0:34:42.039 ********* 2026-03-24 05:24:02.540691 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:24:02.540702 | orchestrator | 2026-03-24 05:24:02.540713 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:24:02.540724 | orchestrator | Tuesday 24 March 2026 05:24:02 +0000 (0:00:01.163) 0:34:43.203 ********* 2026-03-24 05:24:02.540736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:24:02.540751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'uuids': ['53f92492-3feb-4aff-ba7b-51c07dc9f447'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc']}})  2026-03-24 05:24:02.540771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f47182f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:24:02.540784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b']}})  2026-03-24 05:24:02.540797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:24:02.540809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:24:02.540835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-42-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:24:03.917080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:24:03.917186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR', 'dm-uuid-CRYPT-LUKS2-0e39c5b023134ee09db3234d14233a9c-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:24:03.917205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:24:03.917236 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'uuids': ['0e39c5b0-2313-4ee0-9db3-234d14233a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR']}})  2026-03-24 05:24:03.917251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80']}})  2026-03-24 05:24:03.917263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:24:03.917318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85facbe5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:24:03.917338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:24:03.917351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:24:03.917364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc', 'dm-uuid-CRYPT-LUKS2-53f924923feb4affba7b51c07dc9f447-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-24 05:24:03.917376 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:03.917390 | orchestrator |
2026-03-24 05:24:03.917410 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-24 05:24:03.917422 | orchestrator | Tuesday 24 March 2026 05:24:03 +0000 (0:00:01.360) 0:34:44.563 *********
2026-03-24 05:24:03.917435 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-24 05:24:03.917456 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'uuids': ['53f92492-3feb-4aff-ba7b-51c07dc9f447'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard':
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.072876 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f47182f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073024 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073042 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073093 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR', 'dm-uuid-CRYPT-LUKS2-0e39c5b023134ee09db3234d14233a9c-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073118 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'uuids': ['0e39c5b0-2313-4ee0-9db3-234d14233a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073131 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:05.073143 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:24.394206 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85facbe5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:24.394396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:24.394430 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:24:24.394477 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc', 'dm-uuid-CRYPT-LUKS2-53f924923feb4affba7b51c07dc9f447-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-24 05:24:24.394497 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.394520 | orchestrator |
2026-03-24 05:24:24.394542 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-24 05:24:24.394558 | orchestrator | Tuesday 24 March 2026 05:24:05 +0000 (0:00:01.538) 0:34:45.960 *********
2026-03-24 05:24:24.394571 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:24:24.394584 | orchestrator |
2026-03-24 05:24:24.394597 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-24 05:24:24.394609 | orchestrator | Tuesday 24 March 2026 05:24:06 +0000 (0:00:01.108) 0:34:47.499 *********
2026-03-24 05:24:24.394621 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:24:24.394633 | orchestrator |
2026-03-24 05:24:24.394645 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 05:24:24.394657 | orchestrator | Tuesday 24 March 2026 05:24:07 +0000 (0:00:01.511) 0:34:48.607 *********
2026-03-24 05:24:24.394670 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:24:24.394682 | orchestrator |
2026-03-24 05:24:24.394694 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 05:24:24.394716 | orchestrator | Tuesday 24 March 2026 05:24:09 +0000 (0:00:01.511) 0:34:50.119 *********
2026-03-24 05:24:24.394729 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.394740 | orchestrator |
2026-03-24 05:24:24.394753 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 05:24:24.394765 | orchestrator | Tuesday 24 March 2026 05:24:10 +0000 (0:00:01.115) 0:34:51.234 *********
2026-03-24 05:24:24.394788 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.394800 | orchestrator |
2026-03-24 05:24:24.394817 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 05:24:24.394836 | orchestrator | Tuesday 24 March 2026 05:24:11 +0000 (0:00:01.268) 0:34:52.503 *********
2026-03-24 05:24:24.394855 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.394874 | orchestrator |
2026-03-24 05:24:24.394893 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-24 05:24:24.394910 | orchestrator | Tuesday 24 March 2026 05:24:12 +0000 (0:00:01.144) 0:34:53.648 *********
2026-03-24 05:24:24.394929 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-24 05:24:24.394947 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-24 05:24:24.394965 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-24 05:24:24.395050 | orchestrator |
2026-03-24 05:24:24.395073 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-24 05:24:24.395094 | orchestrator | Tuesday 24 March 2026 05:24:14 +0000 (0:00:01.971) 0:34:55.620 *********
2026-03-24 05:24:24.395113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-24 05:24:24.395134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-24 05:24:24.395154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-24 05:24:24.395175 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.395195 | orchestrator |
2026-03-24 05:24:24.395216 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-24 05:24:24.395235 | orchestrator | Tuesday 24 March 2026 05:24:15 +0000 (0:00:01.130) 0:34:56.751 *********
2026-03-24 05:24:24.395247 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-03-24 05:24:24.395258 | orchestrator |
2026-03-24 05:24:24.395270 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:24:24.395282 | orchestrator | Tuesday 24 March 2026 05:24:16 +0000 (0:00:01.136) 0:34:57.888 *********
2026-03-24 05:24:24.395292 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.395303 | orchestrator |
2026-03-24 05:24:24.395314 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:24:24.395324 | orchestrator | Tuesday 24 March 2026 05:24:18 +0000 (0:00:01.187) 0:34:59.076 *********
2026-03-24 05:24:24.395335 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.395345 | orchestrator |
2026-03-24 05:24:24.395356 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:24:24.395367 | orchestrator | Tuesday 24 March 2026 05:24:19 +0000 (0:00:01.113) 0:35:00.189 *********
2026-03-24 05:24:24.395377 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.395388 | orchestrator |
2026-03-24 05:24:24.395398 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:24:24.395409 | orchestrator | Tuesday 24 March 2026 05:24:20 +0000 (0:00:01.121) 0:35:01.311 *********
2026-03-24 05:24:24.395420 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:24:24.395430 | orchestrator |
2026-03-24 05:24:24.395441 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:24:24.395452 | orchestrator | Tuesday 24 March 2026 05:24:21 +0000 (0:00:01.209) 0:35:02.520 *********
2026-03-24 05:24:24.395462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:24:24.395473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 05:24:24.395484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 05:24:24.395495 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.395505 | orchestrator |
2026-03-24 05:24:24.395516 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:24:24.395526 | orchestrator | Tuesday 24 March 2026 05:24:22 +0000 (0:00:01.377) 0:35:03.897 *********
2026-03-24 05:24:24.395537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:24:24.395559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 05:24:24.395571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 05:24:24.395581 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:24:24.395592 | orchestrator |
2026-03-24 05:24:24.395617 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:25:11.325387 | orchestrator | Tuesday 24 March 2026 05:24:24 +0000 (0:00:01.383) 0:35:05.281 *********
2026-03-24 05:25:11.325533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:25:11.325568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 05:25:11.343153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 05:25:11.343243 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.343255 | orchestrator |
2026-03-24 05:25:11.343266 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:25:11.343276 | orchestrator | Tuesday 24 March 2026 05:24:25 +0000 (0:00:01.402) 0:35:06.684 *********
2026-03-24 05:25:11.343284 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.343294 | orchestrator |
2026-03-24 05:25:11.343302 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:25:11.343311 | orchestrator | Tuesday 24 March 2026 05:24:26 +0000 (0:00:01.126) 0:35:07.810 *********
2026-03-24 05:25:11.343319 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-24 05:25:11.343327 | orchestrator |
2026-03-24 05:25:11.343355 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-24 05:25:11.343364 | orchestrator | Tuesday 24 March 2026 05:24:28 +0000 (0:00:01.321) 0:35:09.132 *********
2026-03-24 05:25:11.343389 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:25:11.343399 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:25:11.343407 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:25:11.343415 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:25:11.343423 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 05:25:11.343432 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 05:25:11.343440 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:25:11.343448 | orchestrator |
2026-03-24 05:25:11.343455 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-24 05:25:11.343464 | orchestrator | Tuesday 24 March 2026 05:24:30 +0000 (0:00:02.078) 0:35:11.211 *********
2026-03-24 05:25:11.343471 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:25:11.343480 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:25:11.343487 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:25:11.343495 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:25:11.343503 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 05:25:11.343511 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 05:25:11.343521 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:25:11.343529 | orchestrator |
2026-03-24 05:25:11.343537 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-24 05:25:11.343545 | orchestrator | Tuesday 24 March 2026 05:24:32 +0000 (0:00:02.513) 0:35:13.724 *********
2026-03-24 05:25:11.343553 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.343561 | orchestrator |
2026-03-24 05:25:11.343569 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-24 05:25:11.343577 | orchestrator | Tuesday 24 March 2026 05:24:34 +0000 (0:00:01.490) 0:35:15.215 *********
2026-03-24 05:25:11.343609 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.343617 | orchestrator |
2026-03-24 05:25:11.343625 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-24 05:25:11.343633 | orchestrator | Tuesday 24 March 2026 05:24:35 +0000 (0:00:01.158) 0:35:16.373 *********
2026-03-24 05:25:11.343641 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.343649 | orchestrator |
2026-03-24 05:25:11.343657 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-24 05:25:11.343664 | orchestrator | Tuesday 24 March 2026 05:24:37 +0000 (0:00:01.543) 0:35:17.916 *********
2026-03-24 05:25:11.343672 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-24 05:25:11.343680 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-24 05:25:11.343688 | orchestrator |
2026-03-24 05:25:11.343696 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 05:25:11.343704 | orchestrator | Tuesday 24 March 2026 05:24:41 +0000 (0:00:04.229) 0:35:22.146 *********
2026-03-24 05:25:11.343712 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-03-24 05:25:11.343720 | orchestrator |
2026-03-24 05:25:11.343728 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 05:25:11.343744 | orchestrator | Tuesday 24 March 2026 05:24:42 +0000 (0:00:01.110) 0:35:23.257 *********
2026-03-24 05:25:11.343752 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-03-24 05:25:11.343760 | orchestrator |
2026-03-24 05:25:11.343767 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 05:25:11.343775 | orchestrator | Tuesday 24 March 2026 05:24:43 +0000 (0:00:01.100) 0:35:24.357 *********
2026-03-24 05:25:11.343783 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.343791 | orchestrator |
2026-03-24 05:25:11.343799 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 05:25:11.343806 | orchestrator | Tuesday 24 March 2026 05:24:44 +0000 (0:00:01.095) 0:35:25.453 *********
2026-03-24 05:25:11.343814 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.343822 | orchestrator |
2026-03-24 05:25:11.343830 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 05:25:11.343864 | orchestrator | Tuesday 24 March 2026 05:24:46 +0000 (0:00:01.572) 0:35:27.025 *********
2026-03-24 05:25:11.343873 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.343881 | orchestrator |
2026-03-24 05:25:11.343889 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 05:25:11.343897 | orchestrator | Tuesday 24 March 2026 05:24:47 +0000 (0:00:01.514) 0:35:28.539 *********
2026-03-24 05:25:11.343905 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.343912 | orchestrator |
2026-03-24 05:25:11.343920 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 05:25:11.343928 | orchestrator | Tuesday 24 March 2026 05:24:49 +0000 (0:00:01.544) 0:35:30.083 *********
2026-03-24 05:25:11.343936 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.343944 | orchestrator |
2026-03-24 05:25:11.343952 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 05:25:11.343960 | orchestrator | Tuesday 24 March 2026 05:24:50 +0000 (0:00:01.121) 0:35:31.205 *********
2026-03-24 05:25:11.343967 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.343975 | orchestrator |
2026-03-24 05:25:11.343983 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 05:25:11.343991 | orchestrator | Tuesday 24 March 2026 05:24:51 +0000 (0:00:01.106) 0:35:32.311 *********
2026-03-24 05:25:11.344003 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.344033 | orchestrator |
2026-03-24 05:25:11.344041 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 05:25:11.344049 | orchestrator | Tuesday 24 March 2026 05:24:52 +0000 (0:00:01.102) 0:35:33.413 *********
2026-03-24 05:25:11.344057 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.344071 | orchestrator |
2026-03-24 05:25:11.344079 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 05:25:11.344087 | orchestrator | Tuesday 24 March 2026 05:24:54 +0000 (0:00:01.530) 0:35:34.944 *********
2026-03-24 05:25:11.344095 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.344103 | orchestrator |
2026-03-24 05:25:11.344111 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 05:25:11.344119 | orchestrator | Tuesday 24 March 2026 05:24:55 +0000 (0:00:01.512) 0:35:36.457 *********
2026-03-24 05:25:11.344126 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.344134 | orchestrator |
2026-03-24 05:25:11.344142 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:25:11.344150 | orchestrator | Tuesday 24 March 2026 05:24:56 +0000 (0:00:01.088) 0:35:37.545 *********
2026-03-24 05:25:11.344158 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.344165 | orchestrator |
2026-03-24 05:25:11.344173 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:25:11.344181 | orchestrator | Tuesday 24 March 2026 05:24:57 +0000 (0:00:01.107) 0:35:38.652 *********
2026-03-24 05:25:11.344189 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.344197 | orchestrator |
2026-03-24 05:25:11.344205 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:25:11.344212 | orchestrator | Tuesday 24 March 2026 05:24:58 +0000 (0:00:01.131) 0:35:39.784 *********
2026-03-24 05:25:11.344220 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.344228 | orchestrator |
2026-03-24 05:25:11.344236 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:25:11.344244 | orchestrator | Tuesday 24 March 2026 05:25:00 +0000 (0:00:01.160) 0:35:40.944 *********
2026-03-24 05:25:11.344252 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.344259 | orchestrator |
2026-03-24 05:25:11.344267 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:25:11.344275 | orchestrator | Tuesday 24 March 2026 05:25:01 +0000 (0:00:01.101) 0:35:42.046 *********
2026-03-24 05:25:11.344283 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.344291 | orchestrator |
2026-03-24 05:25:11.344299 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:25:11.344307 | orchestrator | Tuesday 24 March 2026 05:25:02 +0000 (0:00:01.091) 0:35:43.137 *********
2026-03-24 05:25:11.344315 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.344329 | orchestrator |
2026-03-24 05:25:11.344341 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:25:11.344353 | orchestrator | Tuesday 24 March 2026 05:25:03 +0000 (0:00:01.102) 0:35:44.239 *********
2026-03-24 05:25:11.344364 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.344377 | orchestrator |
2026-03-24 05:25:11.344389 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:25:11.344400 | orchestrator | Tuesday 24 March 2026 05:25:04 +0000 (0:00:01.094) 0:35:45.334 *********
2026-03-24 05:25:11.344412 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.344425 | orchestrator |
2026-03-24 05:25:11.344438 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:25:11.344451 | orchestrator | Tuesday 24 March 2026 05:25:05 +0000 (0:00:01.204) 0:35:46.539 *********
2026-03-24 05:25:11.344464 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:25:11.344478 | orchestrator |
2026-03-24 05:25:11.344490 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:25:11.344498 | orchestrator | Tuesday 24 March 2026 05:25:06 +0000 (0:00:01.130) 0:35:47.670 *********
2026-03-24 05:25:11.344505 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:25:11.344513 | orchestrator |
2026-03-24 05:25:11.344521 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:25:11.344531 |
orchestrator | Tuesday 24 March 2026 05:25:07 +0000 (0:00:01.132) 0:35:48.802 ********* 2026-03-24 05:25:11.344546 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:11.344574 | orchestrator | 2026-03-24 05:25:11.344588 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-24 05:25:11.344600 | orchestrator | Tuesday 24 March 2026 05:25:09 +0000 (0:00:01.143) 0:35:49.946 ********* 2026-03-24 05:25:11.344612 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:11.344624 | orchestrator | 2026-03-24 05:25:11.344636 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-24 05:25:11.344648 | orchestrator | Tuesday 24 March 2026 05:25:10 +0000 (0:00:01.104) 0:35:51.050 ********* 2026-03-24 05:25:11.344661 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:11.344674 | orchestrator | 2026-03-24 05:25:11.344699 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-24 05:25:59.078574 | orchestrator | Tuesday 24 March 2026 05:25:11 +0000 (0:00:01.160) 0:35:52.211 ********* 2026-03-24 05:25:59.078695 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.078711 | orchestrator | 2026-03-24 05:25:59.078724 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-24 05:25:59.078735 | orchestrator | Tuesday 24 March 2026 05:25:12 +0000 (0:00:01.118) 0:35:53.329 ********* 2026-03-24 05:25:59.078745 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.078755 | orchestrator | 2026-03-24 05:25:59.078765 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-24 05:25:59.078775 | orchestrator | Tuesday 24 March 2026 05:25:13 +0000 (0:00:01.111) 0:35:54.441 ********* 2026-03-24 05:25:59.078785 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.078795 | orchestrator | 2026-03-24 
05:25:59.078805 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-24 05:25:59.078816 | orchestrator | Tuesday 24 March 2026 05:25:14 +0000 (0:00:01.108) 0:35:55.549 ********* 2026-03-24 05:25:59.078826 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.078835 | orchestrator | 2026-03-24 05:25:59.078845 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-24 05:25:59.078871 | orchestrator | Tuesday 24 March 2026 05:25:15 +0000 (0:00:01.160) 0:35:56.710 ********* 2026-03-24 05:25:59.078882 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.078892 | orchestrator | 2026-03-24 05:25:59.078901 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-24 05:25:59.078911 | orchestrator | Tuesday 24 March 2026 05:25:16 +0000 (0:00:01.109) 0:35:57.820 ********* 2026-03-24 05:25:59.078921 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.078930 | orchestrator | 2026-03-24 05:25:59.078940 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-24 05:25:59.078950 | orchestrator | Tuesday 24 March 2026 05:25:18 +0000 (0:00:01.112) 0:35:58.932 ********* 2026-03-24 05:25:59.078960 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.078969 | orchestrator | 2026-03-24 05:25:59.078979 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-24 05:25:59.078989 | orchestrator | Tuesday 24 March 2026 05:25:19 +0000 (0:00:01.083) 0:36:00.015 ********* 2026-03-24 05:25:59.078999 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.079008 | orchestrator | 2026-03-24 05:25:59.079018 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-24 05:25:59.079028 | orchestrator | Tuesday 24 March 2026 05:25:20 +0000 
(0:00:01.096) 0:36:01.112 ********* 2026-03-24 05:25:59.079106 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:25:59.079119 | orchestrator | 2026-03-24 05:25:59.079132 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-24 05:25:59.079144 | orchestrator | Tuesday 24 March 2026 05:25:22 +0000 (0:00:01.935) 0:36:03.048 ********* 2026-03-24 05:25:59.079157 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:25:59.079169 | orchestrator | 2026-03-24 05:25:59.079181 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-24 05:25:59.079194 | orchestrator | Tuesday 24 March 2026 05:25:24 +0000 (0:00:02.648) 0:36:05.696 ********* 2026-03-24 05:25:59.079206 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-03-24 05:25:59.079243 | orchestrator | 2026-03-24 05:25:59.079256 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-24 05:25:59.079271 | orchestrator | Tuesday 24 March 2026 05:25:25 +0000 (0:00:01.141) 0:36:06.838 ********* 2026-03-24 05:25:59.079289 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.079307 | orchestrator | 2026-03-24 05:25:59.079326 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-24 05:25:59.079343 | orchestrator | Tuesday 24 March 2026 05:25:27 +0000 (0:00:01.115) 0:36:07.953 ********* 2026-03-24 05:25:59.079360 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.079376 | orchestrator | 2026-03-24 05:25:59.079392 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-24 05:25:59.079409 | orchestrator | Tuesday 24 March 2026 05:25:28 +0000 (0:00:01.114) 0:36:09.068 ********* 2026-03-24 05:25:59.079426 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-24 
05:25:59.079444 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-24 05:25:59.079462 | orchestrator | 2026-03-24 05:25:59.079479 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-24 05:25:59.079495 | orchestrator | Tuesday 24 March 2026 05:25:29 +0000 (0:00:01.811) 0:36:10.880 ********* 2026-03-24 05:25:59.079513 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:25:59.079530 | orchestrator | 2026-03-24 05:25:59.079547 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-24 05:25:59.079565 | orchestrator | Tuesday 24 March 2026 05:25:31 +0000 (0:00:01.432) 0:36:12.313 ********* 2026-03-24 05:25:59.079581 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.079600 | orchestrator | 2026-03-24 05:25:59.079617 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-24 05:25:59.079635 | orchestrator | Tuesday 24 March 2026 05:25:32 +0000 (0:00:01.130) 0:36:13.443 ********* 2026-03-24 05:25:59.079652 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.079671 | orchestrator | 2026-03-24 05:25:59.079687 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-24 05:25:59.079705 | orchestrator | Tuesday 24 March 2026 05:25:33 +0000 (0:00:01.117) 0:36:14.561 ********* 2026-03-24 05:25:59.079722 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.079738 | orchestrator | 2026-03-24 05:25:59.079754 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-24 05:25:59.079769 | orchestrator | Tuesday 24 March 2026 05:25:34 +0000 (0:00:01.091) 0:36:15.652 ********* 2026-03-24 05:25:59.079785 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-03-24 05:25:59.079801 | orchestrator | 
2026-03-24 05:25:59.079817 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-24 05:25:59.079858 | orchestrator | Tuesday 24 March 2026 05:25:35 +0000 (0:00:01.099) 0:36:16.752 ********* 2026-03-24 05:25:59.079877 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:25:59.079895 | orchestrator | 2026-03-24 05:25:59.079912 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-24 05:25:59.079930 | orchestrator | Tuesday 24 March 2026 05:25:37 +0000 (0:00:01.815) 0:36:18.567 ********* 2026-03-24 05:25:59.079948 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 05:25:59.079965 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 05:25:59.079982 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 05:25:59.079999 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080015 | orchestrator | 2026-03-24 05:25:59.080062 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-24 05:25:59.080080 | orchestrator | Tuesday 24 March 2026 05:25:38 +0000 (0:00:01.105) 0:36:19.673 ********* 2026-03-24 05:25:59.080116 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080132 | orchestrator | 2026-03-24 05:25:59.080160 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-24 05:25:59.080177 | orchestrator | Tuesday 24 March 2026 05:25:39 +0000 (0:00:01.079) 0:36:20.752 ********* 2026-03-24 05:25:59.080193 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080209 | orchestrator | 2026-03-24 05:25:59.080224 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-24 05:25:59.080241 | orchestrator | Tuesday 24 March 2026 05:25:40 +0000 (0:00:01.122) 
0:36:21.875 ********* 2026-03-24 05:25:59.080256 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080271 | orchestrator | 2026-03-24 05:25:59.080287 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-24 05:25:59.080303 | orchestrator | Tuesday 24 March 2026 05:25:42 +0000 (0:00:01.104) 0:36:22.980 ********* 2026-03-24 05:25:59.080319 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080335 | orchestrator | 2026-03-24 05:25:59.080352 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-24 05:25:59.080368 | orchestrator | Tuesday 24 March 2026 05:25:43 +0000 (0:00:01.079) 0:36:24.060 ********* 2026-03-24 05:25:59.080385 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080400 | orchestrator | 2026-03-24 05:25:59.080417 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-24 05:25:59.080433 | orchestrator | Tuesday 24 March 2026 05:25:44 +0000 (0:00:01.098) 0:36:25.158 ********* 2026-03-24 05:25:59.080449 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:25:59.080464 | orchestrator | 2026-03-24 05:25:59.080480 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-24 05:25:59.080495 | orchestrator | Tuesday 24 March 2026 05:25:46 +0000 (0:00:02.667) 0:36:27.825 ********* 2026-03-24 05:25:59.080511 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:25:59.080526 | orchestrator | 2026-03-24 05:25:59.080542 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-24 05:25:59.080558 | orchestrator | Tuesday 24 March 2026 05:25:48 +0000 (0:00:01.100) 0:36:28.926 ********* 2026-03-24 05:25:59.080574 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-03-24 05:25:59.080589 | orchestrator | 2026-03-24 05:25:59.080604 | 
orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-24 05:25:59.080619 | orchestrator | Tuesday 24 March 2026 05:25:49 +0000 (0:00:01.095) 0:36:30.021 ********* 2026-03-24 05:25:59.080635 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080651 | orchestrator | 2026-03-24 05:25:59.080668 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-24 05:25:59.080685 | orchestrator | Tuesday 24 March 2026 05:25:50 +0000 (0:00:01.135) 0:36:31.157 ********* 2026-03-24 05:25:59.080702 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080718 | orchestrator | 2026-03-24 05:25:59.080733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-24 05:25:59.080750 | orchestrator | Tuesday 24 March 2026 05:25:51 +0000 (0:00:01.137) 0:36:32.294 ********* 2026-03-24 05:25:59.080766 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080781 | orchestrator | 2026-03-24 05:25:59.080796 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-24 05:25:59.080813 | orchestrator | Tuesday 24 March 2026 05:25:52 +0000 (0:00:01.003) 0:36:33.298 ********* 2026-03-24 05:25:59.080830 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080846 | orchestrator | 2026-03-24 05:25:59.080863 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-24 05:25:59.080881 | orchestrator | Tuesday 24 March 2026 05:25:53 +0000 (0:00:01.086) 0:36:34.384 ********* 2026-03-24 05:25:59.080896 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080913 | orchestrator | 2026-03-24 05:25:59.080929 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-24 05:25:59.080945 | orchestrator | Tuesday 24 March 2026 05:25:54 +0000 (0:00:01.118) 0:36:35.503 ********* 
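The surrounding tasks run `ceph --version` inside the pulled container, split the output, and then walk a cascade of `Set_fact ceph_release <name>` tasks until the matching release sticks (here, reef). A minimal sketch of that logic, assuming a typical `ceph --version` output line and a hand-written major-version-to-release mapping (both are assumptions, not the role's exact template):

```shell
# Hypothetical sketch: parse `ceph --version` stdout and derive the
# release name from the major version, mirroring the Set_fact cascade.
version_stdout='ceph version 18.2.4 (e7ad5345525c7aa95470c26863873b581076945d) reef (stable)'

# Third whitespace-separated field is the numeric version.
ceph_version=$(echo "$version_stdout" | awk '{print $3}')

# Map the major version to its release name (assumed mapping).
case "${ceph_version%%.*}" in
  16) ceph_release=pacific ;;
  17) ceph_release=quincy  ;;
  18) ceph_release=reef    ;;
  *)  ceph_release=unknown ;;
esac

echo "$ceph_version"   # prints 18.2.4
echo "$ceph_release"   # prints reef
```

In the log above only the `Set_fact ceph_release reef` task reports `ok`, consistent with a Ceph 18.x container image.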
2026-03-24 05:25:59.080978 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.080997 | orchestrator | 2026-03-24 05:25:59.081014 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-24 05:25:59.081103 | orchestrator | Tuesday 24 March 2026 05:25:55 +0000 (0:00:01.135) 0:36:36.638 ********* 2026-03-24 05:25:59.081125 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.081140 | orchestrator | 2026-03-24 05:25:59.081157 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-24 05:25:59.081204 | orchestrator | Tuesday 24 March 2026 05:25:56 +0000 (0:00:01.089) 0:36:37.728 ********* 2026-03-24 05:25:59.081222 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:25:59.081238 | orchestrator | 2026-03-24 05:25:59.081254 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-24 05:25:59.081264 | orchestrator | Tuesday 24 March 2026 05:25:57 +0000 (0:00:01.088) 0:36:38.817 ********* 2026-03-24 05:25:59.081273 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:25:59.081283 | orchestrator | 2026-03-24 05:25:59.081293 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-24 05:25:59.081330 | orchestrator | Tuesday 24 March 2026 05:25:59 +0000 (0:00:01.151) 0:36:39.968 ********* 2026-03-24 05:26:49.699972 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-03-24 05:26:49.700188 | orchestrator | 2026-03-24 05:26:49.700217 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-24 05:26:49.700230 | orchestrator | Tuesday 24 March 2026 05:26:00 +0000 (0:00:01.081) 0:36:41.050 ********* 2026-03-24 05:26:49.700241 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-24 05:26:49.700252 | orchestrator | ok: [testbed-node-3] => 
(item=/var/lib/ceph/) 2026-03-24 05:26:49.700262 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-24 05:26:49.700272 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-24 05:26:49.700282 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-24 05:26:49.700292 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-24 05:26:49.700301 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-24 05:26:49.700312 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-24 05:26:49.700322 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 05:26:49.700434 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 05:26:49.700453 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 05:26:49.700464 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 05:26:49.700473 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 05:26:49.700484 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 05:26:49.700494 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-24 05:26:49.700504 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-24 05:26:49.700515 | orchestrator | 2026-03-24 05:26:49.700526 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-24 05:26:49.700538 | orchestrator | Tuesday 24 March 2026 05:26:07 +0000 (0:00:06.987) 0:36:48.037 ********* 2026-03-24 05:26:49.700549 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-03-24 05:26:49.700559 | orchestrator | 2026-03-24 05:26:49.700571 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-24 
05:26:49.700581 | orchestrator | Tuesday 24 March 2026 05:26:08 +0000 (0:00:01.472) 0:36:49.510 ********* 2026-03-24 05:26:49.700594 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:26:49.700606 | orchestrator | 2026-03-24 05:26:49.700617 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-24 05:26:49.700652 | orchestrator | Tuesday 24 March 2026 05:26:10 +0000 (0:00:01.529) 0:36:51.039 ********* 2026-03-24 05:26:49.700669 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:26:49.700686 | orchestrator | 2026-03-24 05:26:49.700702 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-24 05:26:49.700718 | orchestrator | Tuesday 24 March 2026 05:26:12 +0000 (0:00:02.025) 0:36:53.065 ********* 2026-03-24 05:26:49.700735 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.700752 | orchestrator | 2026-03-24 05:26:49.700769 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-24 05:26:49.700786 | orchestrator | Tuesday 24 March 2026 05:26:13 +0000 (0:00:01.134) 0:36:54.200 ********* 2026-03-24 05:26:49.700802 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.700818 | orchestrator | 2026-03-24 05:26:49.700833 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-24 05:26:49.700848 | orchestrator | Tuesday 24 March 2026 05:26:14 +0000 (0:00:01.149) 0:36:55.350 ********* 2026-03-24 05:26:49.700863 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.700879 | orchestrator | 2026-03-24 05:26:49.700896 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-24 05:26:49.700913 | 
orchestrator | Tuesday 24 March 2026 05:26:15 +0000 (0:00:01.124) 0:36:56.475 ********* 2026-03-24 05:26:49.700927 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.700943 | orchestrator | 2026-03-24 05:26:49.700959 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-24 05:26:49.700975 | orchestrator | Tuesday 24 March 2026 05:26:16 +0000 (0:00:01.118) 0:36:57.593 ********* 2026-03-24 05:26:49.700990 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701005 | orchestrator | 2026-03-24 05:26:49.701020 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-24 05:26:49.701035 | orchestrator | Tuesday 24 March 2026 05:26:17 +0000 (0:00:01.165) 0:36:58.759 ********* 2026-03-24 05:26:49.701051 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701102 | orchestrator | 2026-03-24 05:26:49.701120 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-24 05:26:49.701136 | orchestrator | Tuesday 24 March 2026 05:26:18 +0000 (0:00:01.101) 0:36:59.860 ********* 2026-03-24 05:26:49.701154 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701171 | orchestrator | 2026-03-24 05:26:49.701187 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-24 05:26:49.701203 | orchestrator | Tuesday 24 March 2026 05:26:20 +0000 (0:00:01.099) 0:37:00.959 ********* 2026-03-24 05:26:49.701218 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701237 | orchestrator | 2026-03-24 05:26:49.701254 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 05:26:49.701269 | orchestrator | Tuesday 24 March 2026 05:26:21 +0000 (0:00:01.153) 0:37:02.113 ********* 2026-03-24 05:26:49.701285 | 
orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701300 | orchestrator | 2026-03-24 05:26:49.701344 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 05:26:49.701363 | orchestrator | Tuesday 24 March 2026 05:26:22 +0000 (0:00:01.139) 0:37:03.253 ********* 2026-03-24 05:26:49.701379 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701396 | orchestrator | 2026-03-24 05:26:49.701408 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 05:26:49.701418 | orchestrator | Tuesday 24 March 2026 05:26:23 +0000 (0:00:01.113) 0:37:04.366 ********* 2026-03-24 05:26:49.701428 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:26:49.701437 | orchestrator | 2026-03-24 05:26:49.701447 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 05:26:49.701457 | orchestrator | Tuesday 24 March 2026 05:26:24 +0000 (0:00:01.197) 0:37:05.563 ********* 2026-03-24 05:26:49.701482 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-24 05:26:49.701492 | orchestrator | 2026-03-24 05:26:49.701506 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 05:26:49.701532 | orchestrator | Tuesday 24 March 2026 05:26:29 +0000 (0:00:04.644) 0:37:10.207 ********* 2026-03-24 05:26:49.701548 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:26:49.701563 | orchestrator | 2026-03-24 05:26:49.701578 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-24 05:26:49.701594 | orchestrator | Tuesday 24 March 2026 05:26:30 +0000 (0:00:01.230) 0:37:11.438 ********* 2026-03-24 05:26:49.701612 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 
'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-24 05:26:49.701631 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-24 05:26:49.701649 | orchestrator | 2026-03-24 05:26:49.701663 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:26:49.701678 | orchestrator | Tuesday 24 March 2026 05:26:38 +0000 (0:00:07.967) 0:37:19.406 ********* 2026-03-24 05:26:49.701692 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701707 | orchestrator | 2026-03-24 05:26:49.701722 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:26:49.701737 | orchestrator | Tuesday 24 March 2026 05:26:39 +0000 (0:00:01.128) 0:37:20.535 ********* 2026-03-24 05:26:49.701754 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701770 | orchestrator | 2026-03-24 05:26:49.701787 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:26:49.701804 | orchestrator | Tuesday 24 March 2026 05:26:40 +0000 (0:00:01.150) 0:37:21.685 ********* 2026-03-24 05:26:49.701815 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701824 | orchestrator | 2026-03-24 05:26:49.701834 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:26:49.701844 | orchestrator | Tuesday 24 March 
2026 05:26:41 +0000 (0:00:01.185) 0:37:22.871 ********* 2026-03-24 05:26:49.701853 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701863 | orchestrator | 2026-03-24 05:26:49.701872 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:26:49.701882 | orchestrator | Tuesday 24 March 2026 05:26:43 +0000 (0:00:01.143) 0:37:24.015 ********* 2026-03-24 05:26:49.701891 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.701901 | orchestrator | 2026-03-24 05:26:49.701910 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:26:49.701920 | orchestrator | Tuesday 24 March 2026 05:26:44 +0000 (0:00:01.148) 0:37:25.163 ********* 2026-03-24 05:26:49.701929 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:26:49.701939 | orchestrator | 2026-03-24 05:26:49.701948 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:26:49.701958 | orchestrator | Tuesday 24 March 2026 05:26:45 +0000 (0:00:01.250) 0:37:26.413 ********* 2026-03-24 05:26:49.701967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:26:49.701977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:26:49.701987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:26:49.701996 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.702135 | orchestrator | 2026-03-24 05:26:49.702157 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:26:49.702167 | orchestrator | Tuesday 24 March 2026 05:26:46 +0000 (0:00:01.405) 0:37:27.819 ********* 2026-03-24 05:26:49.702177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:26:49.702186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:26:49.702195 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:26:49.702205 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:26:49.702214 | orchestrator | 2026-03-24 05:26:49.702224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:26:49.702233 | orchestrator | Tuesday 24 March 2026 05:26:48 +0000 (0:00:01.411) 0:37:29.231 ********* 2026-03-24 05:26:49.702243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:26:49.702253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:26:49.702276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:27:49.297669 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:27:49.297789 | orchestrator | 2026-03-24 05:27:49.297806 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:27:49.297820 | orchestrator | Tuesday 24 March 2026 05:26:49 +0000 (0:00:01.356) 0:37:30.588 ********* 2026-03-24 05:27:49.297832 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:27:49.297843 | orchestrator | 2026-03-24 05:27:49.297855 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:27:49.297866 | orchestrator | Tuesday 24 March 2026 05:26:50 +0000 (0:00:01.172) 0:37:31.760 ********* 2026-03-24 05:27:49.297877 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-24 05:27:49.297888 | orchestrator | 2026-03-24 05:27:49.297900 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:27:49.297911 | orchestrator | Tuesday 24 March 2026 05:26:52 +0000 (0:00:01.310) 0:37:33.071 ********* 2026-03-24 05:27:49.297922 | orchestrator | changed: [testbed-node-3] 2026-03-24 05:27:49.297933 | orchestrator | 2026-03-24 05:27:49.297960 | orchestrator | TASK [ceph-osd : Set_fact add_osd] 
********************************************* 2026-03-24 05:27:49.297972 | orchestrator | Tuesday 24 March 2026 05:26:54 +0000 (0:00:02.139) 0:37:35.210 ********* 2026-03-24 05:27:49.297983 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:27:49.297994 | orchestrator | 2026-03-24 05:27:49.298011 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-24 05:27:49.298133 | orchestrator | Tuesday 24 March 2026 05:26:55 +0000 (0:00:01.146) 0:37:36.357 ********* 2026-03-24 05:27:49.298152 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:27:49.298171 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:27:49.298188 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:27:49.298206 | orchestrator | 2026-03-24 05:27:49.298223 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-24 05:27:49.298243 | orchestrator | Tuesday 24 March 2026 05:26:57 +0000 (0:00:01.610) 0:37:37.968 ********* 2026-03-24 05:27:49.298261 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-03-24 05:27:49.298282 | orchestrator | 2026-03-24 05:27:49.298302 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-24 05:27:49.298322 | orchestrator | Tuesday 24 March 2026 05:26:58 +0000 (0:00:01.434) 0:37:39.403 ********* 2026-03-24 05:27:49.298340 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:27:49.298355 | orchestrator | 2026-03-24 05:27:49.298374 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-24 05:27:49.298393 | orchestrator | Tuesday 24 March 2026 05:26:59 +0000 (0:00:01.116) 0:37:40.519 ********* 2026-03-24 05:27:49.298411 | orchestrator | skipping: 
[testbed-node-3] 2026-03-24 05:27:49.298428 | orchestrator | 2026-03-24 05:27:49.298481 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-24 05:27:49.298502 | orchestrator | Tuesday 24 March 2026 05:27:00 +0000 (0:00:01.103) 0:37:41.623 ********* 2026-03-24 05:27:49.298520 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:27:49.298535 | orchestrator | 2026-03-24 05:27:49.298547 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-24 05:27:49.298557 | orchestrator | Tuesday 24 March 2026 05:27:02 +0000 (0:00:01.431) 0:37:43.055 ********* 2026-03-24 05:27:49.298568 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:27:49.298579 | orchestrator | 2026-03-24 05:27:49.298593 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-24 05:27:49.298611 | orchestrator | Tuesday 24 March 2026 05:27:03 +0000 (0:00:01.127) 0:37:44.183 ********* 2026-03-24 05:27:49.298628 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-24 05:27:49.298645 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-24 05:27:49.298662 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-24 05:27:49.298680 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-24 05:27:49.298698 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-24 05:27:49.298715 | orchestrator | 2026-03-24 05:27:49.298732 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-24 05:27:49.298750 | orchestrator | Tuesday 24 March 2026 05:27:06 +0000 (0:00:03.081) 0:37:47.264 ********* 2026-03-24 05:27:49.298768 | orchestrator | skipping: [testbed-node-3] 2026-03-24 
05:27:49.298784 | orchestrator | 2026-03-24 05:27:49.298803 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-24 05:27:49.298821 | orchestrator | Tuesday 24 March 2026 05:27:07 +0000 (0:00:01.111) 0:37:48.376 ********* 2026-03-24 05:27:49.298837 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-03-24 05:27:49.298854 | orchestrator | 2026-03-24 05:27:49.298871 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-24 05:27:49.298889 | orchestrator | Tuesday 24 March 2026 05:27:08 +0000 (0:00:01.515) 0:37:49.891 ********* 2026-03-24 05:27:49.298908 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-24 05:27:49.298925 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-24 05:27:49.298943 | orchestrator | 2026-03-24 05:27:49.298960 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-24 05:27:49.298977 | orchestrator | Tuesday 24 March 2026 05:27:10 +0000 (0:00:01.854) 0:37:51.746 ********* 2026-03-24 05:27:49.298992 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:27:49.299008 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 05:27:49.299024 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 05:27:49.299041 | orchestrator | 2026-03-24 05:27:49.299182 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:27:49.299210 | orchestrator | Tuesday 24 March 2026 05:27:14 +0000 (0:00:03.256) 0:37:55.003 ********* 2026-03-24 05:27:49.299228 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-24 05:27:49.299245 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 05:27:49.299262 | orchestrator | ok: [testbed-node-3] 2026-03-24 
05:27:49.299279 | orchestrator | 2026-03-24 05:27:49.299296 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-24 05:27:49.299314 | orchestrator | Tuesday 24 March 2026 05:27:16 +0000 (0:00:02.046) 0:37:57.049 ********* 2026-03-24 05:27:49.299331 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:27:49.299348 | orchestrator | 2026-03-24 05:27:49.299364 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-24 05:27:49.299382 | orchestrator | Tuesday 24 March 2026 05:27:17 +0000 (0:00:01.191) 0:37:58.241 ********* 2026-03-24 05:27:49.299422 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:27:49.299441 | orchestrator | 2026-03-24 05:27:49.299470 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-24 05:27:49.299489 | orchestrator | Tuesday 24 March 2026 05:27:18 +0000 (0:00:01.101) 0:37:59.342 ********* 2026-03-24 05:27:49.299508 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:27:49.299526 | orchestrator | 2026-03-24 05:27:49.299543 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-24 05:27:49.299560 | orchestrator | Tuesday 24 March 2026 05:27:19 +0000 (0:00:01.102) 0:38:00.445 ********* 2026-03-24 05:27:49.299576 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-03-24 05:27:49.299594 | orchestrator | 2026-03-24 05:27:49.299611 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-24 05:27:49.299629 | orchestrator | Tuesday 24 March 2026 05:27:21 +0000 (0:00:01.513) 0:38:01.958 ********* 2026-03-24 05:27:49.299646 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:27:49.299664 | orchestrator | 2026-03-24 05:27:49.299681 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-24 
05:27:49.299698 | orchestrator | Tuesday 24 March 2026 05:27:22 +0000 (0:00:01.518) 0:38:03.477 ********* 2026-03-24 05:27:49.299716 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:27:49.299732 | orchestrator | 2026-03-24 05:27:49.299750 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-24 05:27:49.299768 | orchestrator | Tuesday 24 March 2026 05:27:26 +0000 (0:00:03.617) 0:38:07.094 ********* 2026-03-24 05:27:49.299786 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-03-24 05:27:49.299803 | orchestrator | 2026-03-24 05:27:49.299820 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-24 05:27:49.299838 | orchestrator | Tuesday 24 March 2026 05:27:27 +0000 (0:00:01.440) 0:38:08.534 ********* 2026-03-24 05:27:49.299856 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:27:49.299874 | orchestrator | 2026-03-24 05:27:49.299891 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-24 05:27:49.299907 | orchestrator | Tuesday 24 March 2026 05:27:29 +0000 (0:00:01.964) 0:38:10.499 ********* 2026-03-24 05:27:49.299924 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:27:49.299941 | orchestrator | 2026-03-24 05:27:49.299958 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-24 05:27:49.299976 | orchestrator | Tuesday 24 March 2026 05:27:31 +0000 (0:00:02.017) 0:38:12.516 ********* 2026-03-24 05:27:49.299994 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:27:49.300012 | orchestrator | 2026-03-24 05:27:49.300030 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-24 05:27:49.300048 | orchestrator | Tuesday 24 March 2026 05:27:33 +0000 (0:00:02.212) 0:38:14.729 ********* 2026-03-24 05:27:49.300066 | orchestrator | skipping: [testbed-node-3] 2026-03-24 
05:27:49.300120 | orchestrator | 2026-03-24 05:27:49.300139 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-24 05:27:49.300157 | orchestrator | Tuesday 24 March 2026 05:27:34 +0000 (0:00:01.121) 0:38:15.850 ********* 2026-03-24 05:27:49.300176 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:27:49.300193 | orchestrator | 2026-03-24 05:27:49.300210 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-24 05:27:49.300226 | orchestrator | Tuesday 24 March 2026 05:27:36 +0000 (0:00:01.199) 0:38:17.049 ********* 2026-03-24 05:27:49.300244 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-24 05:27:49.300261 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-03-24 05:27:49.300279 | orchestrator | 2026-03-24 05:27:49.300295 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-24 05:27:49.300314 | orchestrator | Tuesday 24 March 2026 05:27:37 +0000 (0:00:01.845) 0:38:18.895 ********* 2026-03-24 05:27:49.300332 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-24 05:27:49.300351 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-03-24 05:27:49.300388 | orchestrator | 2026-03-24 05:27:49.300407 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-24 05:27:49.300424 | orchestrator | Tuesday 24 March 2026 05:27:40 +0000 (0:00:02.890) 0:38:21.786 ********* 2026-03-24 05:27:49.300440 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-24 05:27:49.300458 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-03-24 05:27:49.300476 | orchestrator | 2026-03-24 05:27:49.300493 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-24 05:27:49.300511 | orchestrator | Tuesday 24 March 2026 05:27:45 +0000 (0:00:04.773) 0:38:26.560 ********* 2026-03-24 05:27:49.300529 | orchestrator | 
skipping: [testbed-node-3] 2026-03-24 05:27:49.300546 | orchestrator | 2026-03-24 05:27:49.300563 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-24 05:27:49.300580 | orchestrator | Tuesday 24 March 2026 05:27:46 +0000 (0:00:01.221) 0:38:27.782 ********* 2026-03-24 05:27:49.300597 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:27:49.300616 | orchestrator | 2026-03-24 05:27:49.300632 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-24 05:27:49.300650 | orchestrator | Tuesday 24 March 2026 05:27:48 +0000 (0:00:01.188) 0:38:28.970 ********* 2026-03-24 05:27:49.300668 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:27:49.300685 | orchestrator | 2026-03-24 05:27:49.300727 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-24 05:28:32.349887 | orchestrator | Tuesday 24 March 2026 05:27:49 +0000 (0:00:01.207) 0:38:30.178 ********* 2026-03-24 05:28:32.350081 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:28:32.350155 | orchestrator | 2026-03-24 05:28:32.350176 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-24 05:28:32.350195 | orchestrator | Tuesday 24 March 2026 05:27:50 +0000 (0:00:01.146) 0:38:31.325 ********* 2026-03-24 05:28:32.350212 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:28:32.350229 | orchestrator | 2026-03-24 05:28:32.350245 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-24 05:28:32.350262 | orchestrator | Tuesday 24 March 2026 05:27:51 +0000 (0:00:01.160) 0:38:32.486 ********* 2026-03-24 05:28:32.350278 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-24 05:28:32.350315 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-24 05:28:32.350332 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-24 05:28:32.350349 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:28:32.350366 | orchestrator | 2026-03-24 05:28:32.350382 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 05:28:32.350400 | orchestrator | Tuesday 24 March 2026 05:28:02 +0000 (0:00:11.037) 0:38:43.524 ********* 2026-03-24 05:28:32.350417 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:28:32.350434 | orchestrator | 2026-03-24 05:28:32.350451 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-24 05:28:32.350468 | orchestrator | Tuesday 24 March 2026 05:28:03 +0000 (0:00:01.110) 0:38:44.634 ********* 2026-03-24 05:28:32.350486 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:28:32.350503 | orchestrator | 2026-03-24 05:28:32.350520 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-24 05:28:32.350537 | orchestrator | Tuesday 24 March 2026 05:28:04 +0000 (0:00:01.138) 0:38:45.773 ********* 2026-03-24 05:28:32.350553 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:28:32.350571 | orchestrator | 2026-03-24 05:28:32.350589 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-24 05:28:32.350606 | orchestrator | Tuesday 24 March 2026 05:28:06 +0000 (0:00:01.154) 0:38:46.928 ********* 2026-03-24 05:28:32.350622 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:28:32.350639 | orchestrator | 2026-03-24 05:28:32.350687 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-24 
05:28:32.350706 | orchestrator | Tuesday 24 March 2026 05:28:07 +0000 (0:00:01.109) 0:38:48.037 ********* 2026-03-24 05:28:32.350722 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:28:32.350739 | orchestrator | 2026-03-24 05:28:32.350755 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-24 05:28:32.350771 | orchestrator | Tuesday 24 March 2026 05:28:08 +0000 (0:00:01.127) 0:38:49.164 ********* 2026-03-24 05:28:32.350788 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:28:32.350804 | orchestrator | 2026-03-24 05:28:32.350820 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-24 05:28:32.350836 | orchestrator | Tuesday 24 March 2026 05:28:09 +0000 (0:00:01.095) 0:38:50.261 ********* 2026-03-24 05:28:32.350881 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:28:32.350898 | orchestrator | 2026-03-24 05:28:32.350914 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-24 05:28:32.350946 | orchestrator | 2026-03-24 05:28:32.350963 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:28:32.350979 | orchestrator | Tuesday 24 March 2026 05:28:10 +0000 (0:00:00.930) 0:38:51.191 ********* 2026-03-24 05:28:32.350996 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-03-24 05:28:32.351012 | orchestrator | 2026-03-24 05:28:32.351028 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:28:32.351045 | orchestrator | Tuesday 24 March 2026 05:28:11 +0000 (0:00:01.080) 0:38:52.272 ********* 2026-03-24 05:28:32.351061 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:32.351078 | orchestrator | 2026-03-24 05:28:32.351095 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 
05:28:32.351135 | orchestrator | Tuesday 24 March 2026 05:28:12 +0000 (0:00:01.526) 0:38:53.799 ********* 2026-03-24 05:28:32.351151 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:32.351167 | orchestrator | 2026-03-24 05:28:32.351183 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:28:32.351199 | orchestrator | Tuesday 24 March 2026 05:28:14 +0000 (0:00:01.113) 0:38:54.913 ********* 2026-03-24 05:28:32.351216 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:32.351232 | orchestrator | 2026-03-24 05:28:32.351247 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:28:32.351263 | orchestrator | Tuesday 24 March 2026 05:28:15 +0000 (0:00:01.556) 0:38:56.469 ********* 2026-03-24 05:28:32.351280 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:32.351296 | orchestrator | 2026-03-24 05:28:32.351313 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:28:32.351328 | orchestrator | Tuesday 24 March 2026 05:28:16 +0000 (0:00:01.159) 0:38:57.629 ********* 2026-03-24 05:28:32.351344 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:32.351360 | orchestrator | 2026-03-24 05:28:32.351377 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:28:32.351393 | orchestrator | Tuesday 24 March 2026 05:28:17 +0000 (0:00:01.111) 0:38:58.740 ********* 2026-03-24 05:28:32.351410 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:32.351426 | orchestrator | 2026-03-24 05:28:32.351443 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:28:32.351458 | orchestrator | Tuesday 24 March 2026 05:28:18 +0000 (0:00:01.120) 0:38:59.861 ********* 2026-03-24 05:28:32.351475 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:32.351492 | orchestrator | 2026-03-24 
05:28:32.351509 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:28:32.351548 | orchestrator | Tuesday 24 March 2026 05:28:20 +0000 (0:00:01.124) 0:39:00.986 ********* 2026-03-24 05:28:32.351565 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:32.351582 | orchestrator | 2026-03-24 05:28:32.351598 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:28:32.351615 | orchestrator | Tuesday 24 March 2026 05:28:21 +0000 (0:00:01.130) 0:39:02.116 ********* 2026-03-24 05:28:32.351660 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:28:32.351678 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:28:32.351694 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:28:32.351711 | orchestrator | 2026-03-24 05:28:32.351728 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:28:32.351753 | orchestrator | Tuesday 24 March 2026 05:28:23 +0000 (0:00:01.954) 0:39:04.071 ********* 2026-03-24 05:28:32.351770 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:32.351786 | orchestrator | 2026-03-24 05:28:32.351802 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:28:32.351819 | orchestrator | Tuesday 24 March 2026 05:28:24 +0000 (0:00:01.236) 0:39:05.308 ********* 2026-03-24 05:28:32.351835 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:28:32.351850 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:28:32.351867 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:28:32.351883 | orchestrator 
| 2026-03-24 05:28:32.351899 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:28:32.351916 | orchestrator | Tuesday 24 March 2026 05:28:27 +0000 (0:00:03.157) 0:39:08.465 ********* 2026-03-24 05:28:32.351931 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-24 05:28:32.351945 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-24 05:28:32.351962 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-24 05:28:32.351978 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:32.351993 | orchestrator | 2026-03-24 05:28:32.352010 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:28:32.352027 | orchestrator | Tuesday 24 March 2026 05:28:29 +0000 (0:00:01.710) 0:39:10.176 ********* 2026-03-24 05:28:32.352046 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:28:32.352066 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:28:32.352084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:28:32.352126 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:32.352143 | orchestrator | 2026-03-24 05:28:32.352159 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:28:32.352176 | 
orchestrator | Tuesday 24 March 2026 05:28:31 +0000 (0:00:01.914) 0:39:12.091 ********* 2026-03-24 05:28:32.352195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:32.352216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:32.352250 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:32.352268 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:32.352285 | orchestrator | 2026-03-24 05:28:32.352301 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:28:32.352328 | orchestrator | Tuesday 24 March 2026 05:28:32 +0000 (0:00:01.142) 0:39:13.233 ********* 2026-03-24 05:28:50.653469 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:28:24.972141', 'end': '2026-03-24 05:28:25.018758', 'delta': '0:00:00.046617', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:28:50.653588 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:28:25.817476', 'end': '2026-03-24 05:28:25.884508', 'delta': '0:00:00.067032', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:28:50.653605 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:28:26.365132', 'end': '2026-03-24 05:28:26.418327', 'delta': '0:00:00.053195', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:28:50.653619 | orchestrator | 2026-03-24 05:28:50.653632 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:28:50.653645 | orchestrator | Tuesday 24 March 2026 05:28:33 +0000 (0:00:01.169) 0:39:14.403 ********* 2026-03-24 05:28:50.653657 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:50.653669 | orchestrator | 2026-03-24 05:28:50.653680 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:28:50.653691 | orchestrator | Tuesday 24 March 2026 05:28:34 +0000 (0:00:01.260) 0:39:15.664 ********* 2026-03-24 05:28:50.653702 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:50.653714 | orchestrator | 2026-03-24 05:28:50.653725 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:28:50.653737 | orchestrator | Tuesday 24 March 2026 05:28:35 +0000 (0:00:01.236) 0:39:16.901 ********* 2026-03-24 05:28:50.653773 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:50.653785 | orchestrator | 2026-03-24 05:28:50.653796 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:28:50.653806 | orchestrator | Tuesday 24 March 2026 05:28:37 +0000 (0:00:01.133) 0:39:18.035 ********* 2026-03-24 05:28:50.653817 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:28:50.653828 | orchestrator | 2026-03-24 05:28:50.653839 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:28:50.653850 | orchestrator | Tuesday 24 March 2026 05:28:39 +0000 (0:00:01.914) 0:39:19.949 ********* 2026-03-24 05:28:50.653861 | orchestrator | ok: [testbed-node-4] 2026-03-24 
05:28:50.653871 | orchestrator | 2026-03-24 05:28:50.653882 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:28:50.653893 | orchestrator | Tuesday 24 March 2026 05:28:40 +0000 (0:00:01.146) 0:39:21.096 ********* 2026-03-24 05:28:50.653903 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:50.653914 | orchestrator | 2026-03-24 05:28:50.653925 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:28:50.653936 | orchestrator | Tuesday 24 March 2026 05:28:41 +0000 (0:00:01.161) 0:39:22.257 ********* 2026-03-24 05:28:50.653946 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:50.653957 | orchestrator | 2026-03-24 05:28:50.653968 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:28:50.653979 | orchestrator | Tuesday 24 March 2026 05:28:42 +0000 (0:00:01.189) 0:39:23.447 ********* 2026-03-24 05:28:50.653989 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:50.654000 | orchestrator | 2026-03-24 05:28:50.654010 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:28:50.654146 | orchestrator | Tuesday 24 March 2026 05:28:43 +0000 (0:00:01.088) 0:39:24.535 ********* 2026-03-24 05:28:50.654159 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:50.654171 | orchestrator | 2026-03-24 05:28:50.654201 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:28:50.654213 | orchestrator | Tuesday 24 March 2026 05:28:44 +0000 (0:00:01.097) 0:39:25.633 ********* 2026-03-24 05:28:50.654224 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:50.654235 | orchestrator | 2026-03-24 05:28:50.654246 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:28:50.654257 | orchestrator | Tuesday 24 March 
2026 05:28:45 +0000 (0:00:01.194) 0:39:26.827 ********* 2026-03-24 05:28:50.654268 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:50.654279 | orchestrator | 2026-03-24 05:28:50.654290 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:28:50.654301 | orchestrator | Tuesday 24 March 2026 05:28:47 +0000 (0:00:01.101) 0:39:27.929 ********* 2026-03-24 05:28:50.654312 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:50.654323 | orchestrator | 2026-03-24 05:28:50.654342 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:28:50.654353 | orchestrator | Tuesday 24 March 2026 05:28:48 +0000 (0:00:01.149) 0:39:29.078 ********* 2026-03-24 05:28:50.654364 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:50.654375 | orchestrator | 2026-03-24 05:28:50.654386 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:28:50.654398 | orchestrator | Tuesday 24 March 2026 05:28:49 +0000 (0:00:01.108) 0:39:30.187 ********* 2026-03-24 05:28:50.654409 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:28:50.654420 | orchestrator | 2026-03-24 05:28:50.654431 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:28:50.654441 | orchestrator | Tuesday 24 March 2026 05:28:50 +0000 (0:00:01.135) 0:39:31.322 ********* 2026-03-24 05:28:50.654454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:28:50.654483 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'uuids': ['b8232bef-dd2a-4f87-af94-920947facf6d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7']}})  2026-03-24 05:28:50.654497 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a2e3e3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:28:50.654509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0']}})  2026-03-24 05:28:50.654530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:28:51.934795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:28:51.934918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-39-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:28:51.934938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:28:51.934973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3', 'dm-uuid-CRYPT-LUKS2-fea79c97fade4123ac0e1fedfdaf5b5c-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:28:51.934986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:28:51.934998 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'uuids': ['fea79c97-fade-4123-ac0e-1fedfdaf5b5c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3']}})  2026-03-24 05:28:51.935012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537']}})  2026-03-24 05:28:51.935043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:28:51.935067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '063919ee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:28:51.935088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:28:51.935100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:28:51.935162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7', 'dm-uuid-CRYPT-LUKS2-b8232befdd2a4f87af94920947facf6d-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:28:51.935175 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:28:51.935189 | orchestrator | 2026-03-24 05:28:51.935201 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:28:51.935213 | orchestrator | Tuesday 24 March 2026 05:28:51 +0000 (0:00:01.330) 0:39:32.652 ********* 2026-03-24 05:28:51.935235 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.135887 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'uuids': ['b8232bef-dd2a-4f87-af94-920947facf6d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136024 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a2e3e3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136041 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136059 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136071 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136170 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136209 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3', 'dm-uuid-CRYPT-LUKS2-fea79c97fade4123ac0e1fedfdaf5b5c-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136221 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136232 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'uuids': ['fea79c97-fade-4123-ac0e-1fedfdaf5b5c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:28:53.136258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:29:11.138533 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:29:11.138614 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '063919ee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:29:11.138622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:29:11.138662 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:29:11.138668 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7', 'dm-uuid-CRYPT-LUKS2-b8232befdd2a4f87af94920947facf6d-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:29:11.138673 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:29:11.138678 | orchestrator | 2026-03-24 05:29:11.138683 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:29:11.138688 | orchestrator | Tuesday 24 March 2026 05:28:53 +0000 (0:00:01.378) 0:39:34.030 ********* 2026-03-24 05:29:11.138692 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:29:11.138696 | orchestrator | 2026-03-24 05:29:11.138699 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:29:11.138703 | orchestrator | Tuesday 24 March 2026 05:28:54 +0000 (0:00:01.475) 0:39:35.506 ********* 2026-03-24 05:29:11.138707 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:29:11.138711 | orchestrator | 2026-03-24 05:29:11.138714 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:29:11.138718 | orchestrator | Tuesday 24 March 2026 05:28:55 +0000 (0:00:01.131) 0:39:36.637 ********* 2026-03-24 05:29:11.138722 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:29:11.138725 | orchestrator | 2026-03-24 05:29:11.138729 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:29:11.138733 | orchestrator | Tuesday 24 March 2026 05:28:57 +0000 (0:00:01.466) 0:39:38.104 ********* 2026-03-24 05:29:11.138736 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:29:11.138740 | orchestrator | 2026-03-24 05:29:11.138744 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:29:11.138748 | orchestrator | Tuesday 24 March 2026 05:28:58 +0000 (0:00:01.119) 0:39:39.224 ********* 2026-03-24 05:29:11.138751 | orchestrator | skipping: [testbed-node-4] 2026-03-24 
05:29:11.138755 | orchestrator | 2026-03-24 05:29:11.138759 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:29:11.138763 | orchestrator | Tuesday 24 March 2026 05:28:59 +0000 (0:00:01.340) 0:39:40.565 ********* 2026-03-24 05:29:11.138766 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:29:11.138770 | orchestrator | 2026-03-24 05:29:11.138774 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:29:11.138778 | orchestrator | Tuesday 24 March 2026 05:29:00 +0000 (0:00:01.167) 0:39:41.732 ********* 2026-03-24 05:29:11.138782 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-24 05:29:11.138786 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-24 05:29:11.138790 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-24 05:29:11.138793 | orchestrator | 2026-03-24 05:29:11.138797 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:29:11.138823 | orchestrator | Tuesday 24 March 2026 05:29:02 +0000 (0:00:01.937) 0:39:43.670 ********* 2026-03-24 05:29:11.138827 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-24 05:29:11.138831 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-24 05:29:11.138834 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-24 05:29:11.138838 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:29:11.138842 | orchestrator | 2026-03-24 05:29:11.138846 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:29:11.138849 | orchestrator | Tuesday 24 March 2026 05:29:03 +0000 (0:00:01.124) 0:39:44.794 ********* 2026-03-24 05:29:11.138853 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-03-24 05:29:11.138857 | 
orchestrator | 2026-03-24 05:29:11.138862 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:29:11.138867 | orchestrator | Tuesday 24 March 2026 05:29:05 +0000 (0:00:01.224) 0:39:46.019 *********
2026-03-24 05:29:11.138870 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:11.138874 | orchestrator |
2026-03-24 05:29:11.138878 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:29:11.138882 | orchestrator | Tuesday 24 March 2026 05:29:06 +0000 (0:00:01.155) 0:39:47.174 *********
2026-03-24 05:29:11.138885 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:11.138889 | orchestrator |
2026-03-24 05:29:11.138893 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:29:11.138896 | orchestrator | Tuesday 24 March 2026 05:29:07 +0000 (0:00:01.125) 0:39:48.300 *********
2026-03-24 05:29:11.138900 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:11.138904 | orchestrator |
2026-03-24 05:29:11.138911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:29:11.138914 | orchestrator | Tuesday 24 March 2026 05:29:08 +0000 (0:00:01.127) 0:39:49.428 *********
2026-03-24 05:29:11.138918 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:11.138922 | orchestrator |
2026-03-24 05:29:11.138926 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:29:11.138929 | orchestrator | Tuesday 24 March 2026 05:29:09 +0000 (0:00:01.206) 0:39:50.634 *********
2026-03-24 05:29:11.138936 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-24 05:29:50.422277 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:29:50.422405 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-24 05:29:50.422423 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.422436 | orchestrator |
2026-03-24 05:29:50.422450 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:29:50.422463 | orchestrator | Tuesday 24 March 2026 05:29:11 +0000 (0:00:01.394) 0:39:52.029 *********
2026-03-24 05:29:50.422474 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-24 05:29:50.422485 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:29:50.422496 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-24 05:29:50.422507 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.422526 | orchestrator |
2026-03-24 05:29:50.422549 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:29:50.422575 | orchestrator | Tuesday 24 March 2026 05:29:12 +0000 (0:00:01.369) 0:39:53.398 *********
2026-03-24 05:29:50.422593 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-24 05:29:50.422611 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:29:50.422630 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-24 05:29:50.422649 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.422669 | orchestrator |
2026-03-24 05:29:50.422684 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:29:50.422724 | orchestrator | Tuesday 24 March 2026 05:29:13 +0000 (0:00:01.373) 0:39:54.772 *********
2026-03-24 05:29:50.422741 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.422761 | orchestrator |
2026-03-24 05:29:50.422774 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:29:50.422787 | orchestrator | Tuesday 24 March 2026 05:29:15 +0000 (0:00:01.129) 0:39:55.902 *********
2026-03-24 05:29:50.422800 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-24 05:29:50.422813 | orchestrator |
2026-03-24 05:29:50.422825 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-24 05:29:50.422838 | orchestrator | Tuesday 24 March 2026 05:29:16 +0000 (0:00:01.365) 0:39:57.267 *********
2026-03-24 05:29:50.422851 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:29:50.422863 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:29:50.422876 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:29:50.422888 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-24 05:29:50.422902 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:29:50.422915 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 05:29:50.422928 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:29:50.422940 | orchestrator |
2026-03-24 05:29:50.422952 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-24 05:29:50.422965 | orchestrator | Tuesday 24 March 2026 05:29:18 +0000 (0:00:02.041) 0:39:59.308 *********
2026-03-24 05:29:50.422977 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:29:50.422990 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:29:50.423003 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:29:50.423015 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-24 05:29:50.423025 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:29:50.423036 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 05:29:50.423047 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:29:50.423057 | orchestrator |
2026-03-24 05:29:50.423068 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-24 05:29:50.423079 | orchestrator | Tuesday 24 March 2026 05:29:20 +0000 (0:00:02.252) 0:40:01.561 *********
2026-03-24 05:29:50.423090 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.423100 | orchestrator |
2026-03-24 05:29:50.423111 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-24 05:29:50.423122 | orchestrator | Tuesday 24 March 2026 05:29:21 +0000 (0:00:01.164) 0:40:02.725 *********
2026-03-24 05:29:50.423200 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.423214 | orchestrator |
2026-03-24 05:29:50.423225 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-24 05:29:50.423236 | orchestrator | Tuesday 24 March 2026 05:29:22 +0000 (0:00:00.909) 0:40:03.500 *********
2026-03-24 05:29:50.423247 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.423257 | orchestrator |
2026-03-24 05:29:50.423268 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-24 05:29:50.423279 | orchestrator | Tuesday 24 March 2026 05:29:23 +0000 (0:00:00.909) 0:40:04.410 *********
2026-03-24 05:29:50.423290 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-03-24 05:29:50.423318 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-24 05:29:50.423338 | orchestrator |
2026-03-24 05:29:50.423356 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 05:29:50.423374 | orchestrator | Tuesday 24 March 2026 05:29:27 +0000 (0:00:03.886) 0:40:08.296 *********
2026-03-24 05:29:50.423400 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-03-24 05:29:50.423419 | orchestrator |
2026-03-24 05:29:50.423436 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 05:29:50.423481 | orchestrator | Tuesday 24 March 2026 05:29:28 +0000 (0:00:01.086) 0:40:09.383 *********
2026-03-24 05:29:50.423501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-03-24 05:29:50.423520 | orchestrator |
2026-03-24 05:29:50.423539 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 05:29:50.423557 | orchestrator | Tuesday 24 March 2026 05:29:29 +0000 (0:00:01.090) 0:40:10.473 *********
2026-03-24 05:29:50.423575 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.423595 | orchestrator |
2026-03-24 05:29:50.423614 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 05:29:50.423634 | orchestrator | Tuesday 24 March 2026 05:29:30 +0000 (0:00:01.109) 0:40:11.583 *********
2026-03-24 05:29:50.423646 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.423657 | orchestrator |
2026-03-24 05:29:50.423668 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 05:29:50.423680 | orchestrator | Tuesday 24 March 2026 05:29:32 +0000 (0:00:01.530) 0:40:13.114 *********
2026-03-24 05:29:50.423691 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.423702 | orchestrator |
2026-03-24 05:29:50.423712 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 05:29:50.423723 | orchestrator | Tuesday 24 March 2026 05:29:33 +0000 (0:00:01.546) 0:40:14.660 *********
2026-03-24 05:29:50.423734 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.423744 | orchestrator |
2026-03-24 05:29:50.423756 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 05:29:50.423766 | orchestrator | Tuesday 24 March 2026 05:29:35 +0000 (0:00:01.552) 0:40:16.213 *********
2026-03-24 05:29:50.423777 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.423791 | orchestrator |
2026-03-24 05:29:50.423809 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 05:29:50.423835 | orchestrator | Tuesday 24 March 2026 05:29:36 +0000 (0:00:01.104) 0:40:17.317 *********
2026-03-24 05:29:50.423854 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.423872 | orchestrator |
2026-03-24 05:29:50.423889 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 05:29:50.423906 | orchestrator | Tuesday 24 March 2026 05:29:37 +0000 (0:00:01.126) 0:40:18.443 *********
2026-03-24 05:29:50.423924 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.423941 | orchestrator |
2026-03-24 05:29:50.423959 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 05:29:50.423976 | orchestrator | Tuesday 24 March 2026 05:29:38 +0000 (0:00:01.127) 0:40:19.571 *********
2026-03-24 05:29:50.423991 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.424006 | orchestrator |
2026-03-24 05:29:50.424022 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 05:29:50.424037 | orchestrator | Tuesday 24 March 2026 05:29:40 +0000 (0:00:01.587) 0:40:21.158 *********
2026-03-24 05:29:50.424053 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.424070 | orchestrator |
2026-03-24 05:29:50.424088 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 05:29:50.424106 | orchestrator | Tuesday 24 March 2026 05:29:41 +0000 (0:00:01.536) 0:40:22.695 *********
2026-03-24 05:29:50.424125 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.424209 | orchestrator |
2026-03-24 05:29:50.424229 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:29:50.424247 | orchestrator | Tuesday 24 March 2026 05:29:42 +0000 (0:00:00.767) 0:40:23.462 *********
2026-03-24 05:29:50.424263 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.424280 | orchestrator |
2026-03-24 05:29:50.424315 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:29:50.424335 | orchestrator | Tuesday 24 March 2026 05:29:43 +0000 (0:00:00.773) 0:40:24.235 *********
2026-03-24 05:29:50.424354 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.424372 | orchestrator |
2026-03-24 05:29:50.424390 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:29:50.424404 | orchestrator | Tuesday 24 March 2026 05:29:44 +0000 (0:00:00.770) 0:40:25.005 *********
2026-03-24 05:29:50.424415 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.424425 | orchestrator |
2026-03-24 05:29:50.424436 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:29:50.424447 | orchestrator | Tuesday 24 March 2026 05:29:44 +0000 (0:00:00.776) 0:40:25.782 *********
2026-03-24 05:29:50.424457 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.424468 | orchestrator |
2026-03-24 05:29:50.424479 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:29:50.424489 | orchestrator | Tuesday 24 March 2026 05:29:45 +0000 (0:00:00.801) 0:40:26.584 *********
2026-03-24 05:29:50.424500 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.424516 | orchestrator |
2026-03-24 05:29:50.424543 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:29:50.424565 | orchestrator | Tuesday 24 March 2026 05:29:46 +0000 (0:00:00.770) 0:40:27.355 *********
2026-03-24 05:29:50.424583 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.424603 | orchestrator |
2026-03-24 05:29:50.424621 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:29:50.424641 | orchestrator | Tuesday 24 March 2026 05:29:47 +0000 (0:00:00.757) 0:40:28.112 *********
2026-03-24 05:29:50.424653 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:29:50.424664 | orchestrator |
2026-03-24 05:29:50.424675 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:29:50.424685 | orchestrator | Tuesday 24 March 2026 05:29:47 +0000 (0:00:00.768) 0:40:28.881 *********
2026-03-24 05:29:50.424706 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.424717 | orchestrator |
2026-03-24 05:29:50.424728 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:29:50.424739 | orchestrator | Tuesday 24 March 2026 05:29:48 +0000 (0:00:00.789) 0:40:29.671 *********
2026-03-24 05:29:50.424749 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:29:50.424760 | orchestrator |
2026-03-24 05:29:50.424771 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:29:50.424782 | orchestrator | Tuesday 24 March 2026 05:29:49 +0000 (0:00:00.857) 0:40:30.528 *********
2026-03-24 05:29:50.424806 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696362 | orchestrator |
2026-03-24 05:30:32.696464 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:30:32.696477 | orchestrator | Tuesday 24 March 2026 05:29:50 +0000 (0:00:00.784) 0:40:31.312 *********
2026-03-24 05:30:32.696486 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696495 | orchestrator |
2026-03-24 05:30:32.696503 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 05:30:32.696511 | orchestrator | Tuesday 24 March 2026 05:29:51 +0000 (0:00:00.766) 0:40:32.079 *********
2026-03-24 05:30:32.696519 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696526 | orchestrator |
2026-03-24 05:30:32.696534 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 05:30:32.696542 | orchestrator | Tuesday 24 March 2026 05:29:51 +0000 (0:00:00.760) 0:40:32.839 *********
2026-03-24 05:30:32.696549 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696557 | orchestrator |
2026-03-24 05:30:32.696564 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 05:30:32.696572 | orchestrator | Tuesday 24 March 2026 05:29:52 +0000 (0:00:00.756) 0:40:33.595 *********
2026-03-24 05:30:32.696580 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696587 | orchestrator |
2026-03-24 05:30:32.696615 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 05:30:32.696623 | orchestrator | Tuesday 24 March 2026 05:29:53 +0000 (0:00:00.821) 0:40:34.416 *********
2026-03-24 05:30:32.696630 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696637 | orchestrator |
2026-03-24 05:30:32.696644 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 05:30:32.696652 | orchestrator | Tuesday 24 March 2026 05:29:54 +0000 (0:00:00.767) 0:40:35.184 *********
2026-03-24 05:30:32.696659 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696666 | orchestrator |
2026-03-24 05:30:32.696673 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 05:30:32.696681 | orchestrator | Tuesday 24 March 2026 05:29:55 +0000 (0:00:00.763) 0:40:35.948 *********
2026-03-24 05:30:32.696689 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696696 | orchestrator |
2026-03-24 05:30:32.696703 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 05:30:32.696721 | orchestrator | Tuesday 24 March 2026 05:29:55 +0000 (0:00:00.783) 0:40:36.732 *********
2026-03-24 05:30:32.696728 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696735 | orchestrator |
2026-03-24 05:30:32.696743 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 05:30:32.696750 | orchestrator | Tuesday 24 March 2026 05:29:56 +0000 (0:00:00.746) 0:40:37.479 *********
2026-03-24 05:30:32.696757 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696764 | orchestrator |
2026-03-24 05:30:32.696772 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 05:30:32.696779 | orchestrator | Tuesday 24 March 2026 05:29:57 +0000 (0:00:00.771) 0:40:38.250 *********
2026-03-24 05:30:32.696786 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696793 | orchestrator |
2026-03-24 05:30:32.696801 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 05:30:32.696808 | orchestrator | Tuesday 24 March 2026 05:29:58 +0000 (0:00:00.759) 0:40:39.009 *********
2026-03-24 05:30:32.696815 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696822 | orchestrator |
2026-03-24 05:30:32.696830 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 05:30:32.696837 | orchestrator | Tuesday 24 March 2026 05:29:58 +0000 (0:00:00.848) 0:40:39.859 *********
2026-03-24 05:30:32.696844 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:30:32.696852 | orchestrator |
2026-03-24 05:30:32.696860 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 05:30:32.696867 | orchestrator | Tuesday 24 March 2026 05:30:00 +0000 (0:00:01.597) 0:40:41.456 *********
2026-03-24 05:30:32.696874 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:30:32.696881 | orchestrator |
2026-03-24 05:30:32.696888 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 05:30:32.696896 | orchestrator | Tuesday 24 March 2026 05:30:02 +0000 (0:00:01.882) 0:40:43.339 *********
2026-03-24 05:30:32.696903 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-03-24 05:30:32.696912 | orchestrator |
2026-03-24 05:30:32.696921 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 05:30:32.696929 | orchestrator | Tuesday 24 March 2026 05:30:03 +0000 (0:00:01.108) 0:40:44.448 *********
2026-03-24 05:30:32.696938 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696946 | orchestrator |
2026-03-24 05:30:32.696955 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 05:30:32.696964 | orchestrator | Tuesday 24 March 2026 05:30:04 +0000 (0:00:01.115) 0:40:45.564 *********
2026-03-24 05:30:32.696972 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.696980 | orchestrator |
2026-03-24 05:30:32.696988 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 05:30:32.696996 | orchestrator | Tuesday 24 March 2026 05:30:05 +0000 (0:00:01.164) 0:40:46.729 *********
2026-03-24 05:30:32.697011 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 05:30:32.697019 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 05:30:32.697028 | orchestrator |
2026-03-24 05:30:32.697049 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 05:30:32.697057 | orchestrator | Tuesday 24 March 2026 05:30:07 +0000 (0:00:01.808) 0:40:48.537 *********
2026-03-24 05:30:32.697066 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:30:32.697074 | orchestrator |
2026-03-24 05:30:32.697082 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-24 05:30:32.697091 | orchestrator | Tuesday 24 March 2026 05:30:09 +0000 (0:00:01.885) 0:40:50.422 *********
2026-03-24 05:30:32.697099 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697107 | orchestrator |
2026-03-24 05:30:32.697129 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-24 05:30:32.697138 | orchestrator | Tuesday 24 March 2026 05:30:10 +0000 (0:00:01.151) 0:40:51.574 *********
2026-03-24 05:30:32.697181 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697190 | orchestrator |
2026-03-24 05:30:32.697199 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 05:30:32.697207 | orchestrator | Tuesday 24 March 2026 05:30:11 +0000 (0:00:00.777) 0:40:52.352 *********
2026-03-24 05:30:32.697215 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697223 | orchestrator |
2026-03-24 05:30:32.697231 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 05:30:32.697239 | orchestrator | Tuesday 24 March 2026 05:30:12 +0000 (0:00:00.764) 0:40:53.116 *********
2026-03-24 05:30:32.697247 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-03-24 05:30:32.697256 | orchestrator |
2026-03-24 05:30:32.697264 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-24 05:30:32.697273 | orchestrator | Tuesday 24 March 2026 05:30:13 +0000 (0:00:01.208) 0:40:54.324 *********
2026-03-24 05:30:32.697280 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:30:32.697287 | orchestrator |
2026-03-24 05:30:32.697295 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-24 05:30:32.697302 | orchestrator | Tuesday 24 March 2026 05:30:15 +0000 (0:00:01.712) 0:40:56.037 *********
2026-03-24 05:30:32.697309 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-24 05:30:32.697317 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-24 05:30:32.697324 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-24 05:30:32.697331 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697338 | orchestrator |
2026-03-24 05:30:32.697346 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-24 05:30:32.697353 | orchestrator | Tuesday 24 March 2026 05:30:16 +0000 (0:00:01.116) 0:40:57.154 *********
2026-03-24 05:30:32.697360 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697368 | orchestrator |
2026-03-24 05:30:32.697375 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-24 05:30:32.697382 | orchestrator | Tuesday 24 March 2026 05:30:17 +0000 (0:00:01.106) 0:40:58.260 *********
2026-03-24 05:30:32.697390 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697397 | orchestrator |
2026-03-24 05:30:32.697404 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-24 05:30:32.697411 | orchestrator | Tuesday 24 March 2026 05:30:18 +0000 (0:00:01.200) 0:40:59.461 *********
2026-03-24 05:30:32.697418 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697426 | orchestrator |
2026-03-24 05:30:32.697433 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-24 05:30:32.697440 | orchestrator | Tuesday 24 March 2026 05:30:19 +0000 (0:00:01.126) 0:41:00.587 *********
2026-03-24 05:30:32.697447 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697460 | orchestrator |
2026-03-24 05:30:32.697468 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-24 05:30:32.697475 | orchestrator | Tuesday 24 March 2026 05:30:20 +0000 (0:00:01.157) 0:41:01.745 *********
2026-03-24 05:30:32.697482 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697489 | orchestrator |
2026-03-24 05:30:32.697496 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 05:30:32.697503 | orchestrator | Tuesday 24 March 2026 05:30:21 +0000 (0:00:00.788) 0:41:02.533 *********
2026-03-24 05:30:32.697511 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:30:32.697518 | orchestrator |
2026-03-24 05:30:32.697525 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 05:30:32.697532 | orchestrator | Tuesday 24 March 2026 05:30:23 +0000 (0:00:02.242) 0:41:04.776 *********
2026-03-24 05:30:32.697539 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:30:32.697547 | orchestrator |
2026-03-24 05:30:32.697554 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 05:30:32.697561 | orchestrator | Tuesday 24 March 2026 05:30:24 +0000 (0:00:00.833) 0:41:05.609 *********
2026-03-24 05:30:32.697568 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-03-24 05:30:32.697575 | orchestrator |
2026-03-24 05:30:32.697583 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-24 05:30:32.697590 | orchestrator | Tuesday 24 March 2026 05:30:25 +0000 (0:00:01.124) 0:41:06.734 *********
2026-03-24 05:30:32.697597 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697604 | orchestrator |
2026-03-24 05:30:32.697612 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-24 05:30:32.697619 | orchestrator | Tuesday 24 March 2026 05:30:26 +0000 (0:00:01.144) 0:41:07.879 *********
2026-03-24 05:30:32.697626 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697633 | orchestrator |
2026-03-24 05:30:32.697641 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-24 05:30:32.697648 | orchestrator | Tuesday 24 March 2026 05:30:28 +0000 (0:00:01.158) 0:41:09.037 *********
2026-03-24 05:30:32.697655 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697662 | orchestrator |
2026-03-24 05:30:32.697669 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-24 05:30:32.697676 | orchestrator | Tuesday 24 March 2026 05:30:29 +0000 (0:00:01.134) 0:41:10.171 *********
2026-03-24 05:30:32.697688 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697695 | orchestrator |
2026-03-24 05:30:32.697702 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-24 05:30:32.697710 | orchestrator | Tuesday 24 March 2026 05:30:30 +0000 (0:00:01.127) 0:41:11.299 *********
2026-03-24 05:30:32.697717 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:30:32.697724 | orchestrator |
2026-03-24 05:30:32.697731 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-24 05:30:32.697739 | orchestrator | Tuesday 24 March 2026 05:30:31 +0000 (0:00:01.159) 0:41:12.459 *********
2026-03-24 05:30:32.697751 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827095 | orchestrator |
2026-03-24 05:31:14.827260 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-24 05:31:14.827274 | orchestrator | Tuesday 24 March 2026 05:30:32 +0000 (0:00:01.125) 0:41:13.584 *********
2026-03-24 05:31:14.827283 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827293 | orchestrator |
2026-03-24 05:31:14.827302 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-24 05:31:14.827310 | orchestrator | Tuesday 24 March 2026 05:30:33 +0000 (0:00:01.135) 0:41:14.720 *********
2026-03-24 05:31:14.827318 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827327 | orchestrator |
2026-03-24 05:31:14.827335 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-24 05:31:14.827343 | orchestrator | Tuesday 24 March 2026 05:30:34 +0000 (0:00:01.158) 0:41:15.879 *********
2026-03-24 05:31:14.827378 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:31:14.827388 | orchestrator |
2026-03-24 05:31:14.827397 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 05:31:14.827405 | orchestrator | Tuesday 24 March 2026 05:30:35 +0000 (0:00:00.814) 0:41:16.693 *********
2026-03-24 05:31:14.827414 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-03-24 05:31:14.827423 | orchestrator |
2026-03-24 05:31:14.827431 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-24 05:31:14.827439 | orchestrator | Tuesday 24 March 2026 05:30:36 +0000 (0:00:01.107) 0:41:17.801 *********
2026-03-24 05:31:14.827447 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-03-24 05:31:14.827457 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-24 05:31:14.827465 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-24 05:31:14.827473 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-24 05:31:14.827481 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-24 05:31:14.827489 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-24 05:31:14.827497 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-24 05:31:14.827506 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-24 05:31:14.827514 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-24 05:31:14.827522 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-24 05:31:14.827530 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-24 05:31:14.827538 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-24 05:31:14.827546 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-24 05:31:14.827554 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-24 05:31:14.827562 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-03-24 05:31:14.827570 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-03-24 05:31:14.827578 | orchestrator |
2026-03-24 05:31:14.827586 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-24 05:31:14.827594 | orchestrator | Tuesday 24 March 2026 05:30:43 +0000 (0:00:06.347) 0:41:24.148 *********
2026-03-24 05:31:14.827602 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-03-24 05:31:14.827610 | orchestrator |
2026-03-24 05:31:14.827618 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-24 05:31:14.827626 | orchestrator | Tuesday 24 March 2026 05:30:44 +0000 (0:00:01.187) 0:41:25.336 *********
2026-03-24 05:31:14.827634 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:31:14.827645 | orchestrator |
2026-03-24 05:31:14.827653 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-24 05:31:14.827661 | orchestrator | Tuesday 24 March 2026 05:30:46 +0000 (0:00:01.582) 0:41:26.918 *********
2026-03-24 05:31:14.827669 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:31:14.827676 | orchestrator |
2026-03-24 05:31:14.827684 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-24 05:31:14.827692 | orchestrator | Tuesday 24 March 2026 05:30:47 +0000 (0:00:01.641) 0:41:28.560 *********
2026-03-24 05:31:14.827700 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827708 | orchestrator |
2026-03-24 05:31:14.827716 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-24 05:31:14.827724 | orchestrator | Tuesday 24 March 2026 05:30:48 +0000 (0:00:00.771) 0:41:29.332 *********
2026-03-24 05:31:14.827732 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827740 | orchestrator |
2026-03-24 05:31:14.827748 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-24 05:31:14.827763 | orchestrator | Tuesday 24 March 2026 05:30:49 +0000 (0:00:00.763) 0:41:30.096 *********
2026-03-24 05:31:14.827771 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827779 | orchestrator |
2026-03-24 05:31:14.827787 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-24 05:31:14.827794 | orchestrator | Tuesday 24 March 2026 05:30:49 +0000 (0:00:00.753) 0:41:30.849 *********
2026-03-24 05:31:14.827818 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827826 | orchestrator |
2026-03-24 05:31:14.827834 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-24 05:31:14.827842 | orchestrator | Tuesday 24 March 2026 05:30:50 +0000 (0:00:00.810) 0:41:31.659 *********
2026-03-24 05:31:14.827851 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827864 | orchestrator |
2026-03-24 05:31:14.827877 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-24 05:31:14.827890 | orchestrator | Tuesday 24 March 2026 05:30:51 +0000 (0:00:00.769) 0:41:32.429 *********
2026-03-24 05:31:14.827923 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827936 | orchestrator |
2026-03-24 05:31:14.827948 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-24 05:31:14.827959 | orchestrator | Tuesday 24 March 2026 05:30:52 +0000 (0:00:00.759) 0:41:33.189 *********
2026-03-24 05:31:14.827970 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.827981 | orchestrator |
2026-03-24 05:31:14.827992 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-24 05:31:14.828004 | orchestrator | Tuesday 24 March 2026 05:30:53 +0000 (0:00:00.773) 0:41:33.962 *********
2026-03-24 05:31:14.828017 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.828029 | orchestrator |
2026-03-24 05:31:14.828042 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-24 05:31:14.828054 | orchestrator | Tuesday 24 March 2026 05:30:53 +0000 (0:00:00.772) 0:41:34.735 *********
2026-03-24 05:31:14.828068 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.828080 | orchestrator |
2026-03-24 05:31:14.828092 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-24 05:31:14.828106 | orchestrator | Tuesday 24 March 2026 05:30:54 +0000 (0:00:00.753) 0:41:35.489 *********
2026-03-24 05:31:14.828118 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:31:14.828132 | orchestrator |
2026-03-24 05:31:14.828145 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-24 05:31:14.828157 | orchestrator | Tuesday 24 March 2026 05:30:55 +0000 (0:00:00.790) 0:41:36.280 *********
2026-03-24 05:31:14.828242 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:31:14.828255 | orchestrator |
2026-03-24 05:31:14.828270 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-24 05:31:14.828284 | orchestrator | Tuesday 24 March 2026 05:30:56 +0000 (0:00:00.852) 0:41:37.133 *********
2026-03-24 05:31:14.828298 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-03-24 05:31:14.828310 | orchestrator |
2026-03-24 05:31:14.828323 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-24 05:31:14.828336 | orchestrator | Tuesday 24 March 2026 05:31:00 +0000 (0:00:04.311) 0:41:41.444 *********
2026-03-24 05:31:14.828350 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:31:14.828363 | orchestrator |
2026-03-24 05:31:14.828377 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-24 05:31:14.828385 | orchestrator | Tuesday 24 March 2026 05:31:01 +0000 (0:00:00.824) 0:41:42.269 *********
2026-03-24 05:31:14.828396 |
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-24 05:31:14.828418 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-24 05:31:14.828428 | orchestrator | 2026-03-24 05:31:14.828436 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:31:14.828444 | orchestrator | Tuesday 24 March 2026 05:31:08 +0000 (0:00:07.537) 0:41:49.806 ********* 2026-03-24 05:31:14.828451 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:31:14.828460 | orchestrator | 2026-03-24 05:31:14.828473 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:31:14.828485 | orchestrator | Tuesday 24 March 2026 05:31:09 +0000 (0:00:00.795) 0:41:50.602 ********* 2026-03-24 05:31:14.828497 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:31:14.828510 | orchestrator | 2026-03-24 05:31:14.828523 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:31:14.828548 | orchestrator | Tuesday 24 March 2026 05:31:10 +0000 (0:00:00.778) 0:41:51.381 ********* 2026-03-24 05:31:14.828563 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:31:14.828573 | orchestrator | 2026-03-24 05:31:14.828581 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-24 05:31:14.828589 | orchestrator | Tuesday 24 March 2026 05:31:11 +0000 (0:00:00.779) 0:41:52.161 ********* 2026-03-24 05:31:14.828596 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:31:14.828604 | orchestrator | 2026-03-24 05:31:14.828612 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:31:14.828619 | orchestrator | Tuesday 24 March 2026 05:31:12 +0000 (0:00:00.800) 0:41:52.962 ********* 2026-03-24 05:31:14.828631 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:31:14.828644 | orchestrator | 2026-03-24 05:31:14.828658 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:31:14.828680 | orchestrator | Tuesday 24 March 2026 05:31:12 +0000 (0:00:00.807) 0:41:53.770 ********* 2026-03-24 05:31:14.828694 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:31:14.828703 | orchestrator | 2026-03-24 05:31:14.828711 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:31:14.828719 | orchestrator | Tuesday 24 March 2026 05:31:13 +0000 (0:00:00.884) 0:41:54.655 ********* 2026-03-24 05:31:14.828726 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:31:14.828735 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:31:14.828755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:32:04.393942 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.394090 | orchestrator | 2026-03-24 05:32:04.394104 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:32:04.394114 | orchestrator | Tuesday 24 March 2026 05:31:14 +0000 (0:00:01.060) 0:41:55.715 ********* 2026-03-24 05:32:04.394121 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:32:04.394129 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:32:04.394136 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:32:04.394143 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.394151 | orchestrator | 2026-03-24 05:32:04.394158 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:32:04.394164 | orchestrator | Tuesday 24 March 2026 05:31:16 +0000 (0:00:01.406) 0:41:57.122 ********* 2026-03-24 05:32:04.394171 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:32:04.394218 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:32:04.394251 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:32:04.394259 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.394265 | orchestrator | 2026-03-24 05:32:04.394272 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:32:04.394278 | orchestrator | Tuesday 24 March 2026 05:31:17 +0000 (0:00:01.367) 0:41:58.489 ********* 2026-03-24 05:32:04.394284 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.394292 | orchestrator | 2026-03-24 05:32:04.394298 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:32:04.394305 | orchestrator | Tuesday 24 March 2026 05:31:18 +0000 (0:00:00.898) 0:41:59.388 ********* 2026-03-24 05:32:04.394312 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-24 05:32:04.394318 | orchestrator | 2026-03-24 05:32:04.394325 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:32:04.394332 | orchestrator | Tuesday 24 March 2026 05:31:19 +0000 (0:00:00.985) 0:42:00.373 ********* 2026-03-24 05:32:04.394339 | orchestrator | changed: [testbed-node-4] 2026-03-24 05:32:04.394346 | orchestrator | 
2026-03-24 05:32:04.394354 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-24 05:32:04.394361 | orchestrator | Tuesday 24 March 2026 05:31:20 +0000 (0:00:01.450) 0:42:01.823 ********* 2026-03-24 05:32:04.394368 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.394375 | orchestrator | 2026-03-24 05:32:04.394382 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-24 05:32:04.394389 | orchestrator | Tuesday 24 March 2026 05:31:21 +0000 (0:00:00.796) 0:42:02.620 ********* 2026-03-24 05:32:04.394397 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:32:04.394405 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:32:04.394412 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:32:04.394418 | orchestrator | 2026-03-24 05:32:04.394425 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-24 05:32:04.394432 | orchestrator | Tuesday 24 March 2026 05:31:22 +0000 (0:00:01.256) 0:42:03.877 ********* 2026-03-24 05:32:04.394440 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-03-24 05:32:04.394446 | orchestrator | 2026-03-24 05:32:04.394453 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-24 05:32:04.394460 | orchestrator | Tuesday 24 March 2026 05:31:24 +0000 (0:00:01.134) 0:42:05.012 ********* 2026-03-24 05:32:04.394467 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.394473 | orchestrator | 2026-03-24 05:32:04.394480 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-24 05:32:04.394487 | orchestrator | Tuesday 24 March 2026 05:31:25 +0000 (0:00:01.132) 
0:42:06.144 ********* 2026-03-24 05:32:04.394494 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.394501 | orchestrator | 2026-03-24 05:32:04.394509 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-24 05:32:04.394517 | orchestrator | Tuesday 24 March 2026 05:31:26 +0000 (0:00:01.150) 0:42:07.295 ********* 2026-03-24 05:32:04.394524 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.394531 | orchestrator | 2026-03-24 05:32:04.394538 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-24 05:32:04.394545 | orchestrator | Tuesday 24 March 2026 05:31:27 +0000 (0:00:01.494) 0:42:08.789 ********* 2026-03-24 05:32:04.394552 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.394559 | orchestrator | 2026-03-24 05:32:04.394566 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-24 05:32:04.394573 | orchestrator | Tuesday 24 March 2026 05:31:29 +0000 (0:00:01.170) 0:42:09.960 ********* 2026-03-24 05:32:04.394581 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-24 05:32:04.394589 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-24 05:32:04.394605 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-24 05:32:04.394612 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-24 05:32:04.394632 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-24 05:32:04.394639 | orchestrator | 2026-03-24 05:32:04.394646 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-24 05:32:04.394653 | orchestrator | Tuesday 24 March 2026 05:31:32 +0000 (0:00:03.597) 0:42:13.557 ********* 2026-03-24 
05:32:04.394660 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.394667 | orchestrator | 2026-03-24 05:32:04.394674 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-24 05:32:04.394681 | orchestrator | Tuesday 24 March 2026 05:31:33 +0000 (0:00:00.792) 0:42:14.349 ********* 2026-03-24 05:32:04.394705 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-03-24 05:32:04.394712 | orchestrator | 2026-03-24 05:32:04.394720 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-24 05:32:04.394727 | orchestrator | Tuesday 24 March 2026 05:31:34 +0000 (0:00:01.112) 0:42:15.461 ********* 2026-03-24 05:32:04.394734 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-24 05:32:04.394741 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-24 05:32:04.394748 | orchestrator | 2026-03-24 05:32:04.394755 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-24 05:32:04.394762 | orchestrator | Tuesday 24 March 2026 05:31:36 +0000 (0:00:01.846) 0:42:17.308 ********* 2026-03-24 05:32:04.394769 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:32:04.394776 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-24 05:32:04.394783 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 05:32:04.394790 | orchestrator | 2026-03-24 05:32:04.394798 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:32:04.394805 | orchestrator | Tuesday 24 March 2026 05:31:39 +0000 (0:00:03.221) 0:42:20.530 ********* 2026-03-24 05:32:04.394812 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-24 05:32:04.394819 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-24 
05:32:04.394826 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.394832 | orchestrator | 2026-03-24 05:32:04.394840 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-24 05:32:04.394846 | orchestrator | Tuesday 24 March 2026 05:31:41 +0000 (0:00:01.653) 0:42:22.184 ********* 2026-03-24 05:32:04.394853 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.394860 | orchestrator | 2026-03-24 05:32:04.394867 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-24 05:32:04.394874 | orchestrator | Tuesday 24 March 2026 05:31:42 +0000 (0:00:00.921) 0:42:23.106 ********* 2026-03-24 05:32:04.394880 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.394887 | orchestrator | 2026-03-24 05:32:04.394894 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-24 05:32:04.394901 | orchestrator | Tuesday 24 March 2026 05:31:42 +0000 (0:00:00.768) 0:42:23.874 ********* 2026-03-24 05:32:04.394908 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.394915 | orchestrator | 2026-03-24 05:32:04.394921 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-24 05:32:04.394928 | orchestrator | Tuesday 24 March 2026 05:31:43 +0000 (0:00:00.778) 0:42:24.653 ********* 2026-03-24 05:32:04.394936 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-03-24 05:32:04.394943 | orchestrator | 2026-03-24 05:32:04.394949 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-24 05:32:04.394956 | orchestrator | Tuesday 24 March 2026 05:31:44 +0000 (0:00:01.103) 0:42:25.757 ********* 2026-03-24 05:32:04.394968 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.394975 | orchestrator | 2026-03-24 05:32:04.394982 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-24 05:32:04.394989 | orchestrator | Tuesday 24 March 2026 05:31:46 +0000 (0:00:01.488) 0:42:27.246 ********* 2026-03-24 05:32:04.394996 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.395003 | orchestrator | 2026-03-24 05:32:04.395009 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-24 05:32:04.395016 | orchestrator | Tuesday 24 March 2026 05:31:49 +0000 (0:00:03.508) 0:42:30.755 ********* 2026-03-24 05:32:04.395022 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-03-24 05:32:04.395029 | orchestrator | 2026-03-24 05:32:04.395035 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-24 05:32:04.395043 | orchestrator | Tuesday 24 March 2026 05:31:51 +0000 (0:00:01.203) 0:42:31.959 ********* 2026-03-24 05:32:04.395050 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.395057 | orchestrator | 2026-03-24 05:32:04.395064 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-24 05:32:04.395071 | orchestrator | Tuesday 24 March 2026 05:31:53 +0000 (0:00:01.990) 0:42:33.949 ********* 2026-03-24 05:32:04.395077 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.395084 | orchestrator | 2026-03-24 05:32:04.395091 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-24 05:32:04.395098 | orchestrator | Tuesday 24 March 2026 05:31:55 +0000 (0:00:01.970) 0:42:35.919 ********* 2026-03-24 05:32:04.395105 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:32:04.395111 | orchestrator | 2026-03-24 05:32:04.395119 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-24 05:32:04.395126 | orchestrator | Tuesday 24 March 2026 05:31:57 +0000 (0:00:02.302) 0:42:38.222 ********* 2026-03-24 
05:32:04.395133 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.395140 | orchestrator | 2026-03-24 05:32:04.395146 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-24 05:32:04.395153 | orchestrator | Tuesday 24 March 2026 05:31:58 +0000 (0:00:01.121) 0:42:39.343 ********* 2026-03-24 05:32:04.395160 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:04.395167 | orchestrator | 2026-03-24 05:32:04.395174 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-24 05:32:04.395196 | orchestrator | Tuesday 24 March 2026 05:31:59 +0000 (0:00:01.125) 0:42:40.469 ********* 2026-03-24 05:32:04.395207 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-24 05:32:04.395215 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-03-24 05:32:04.395221 | orchestrator | 2026-03-24 05:32:04.395228 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-24 05:32:04.395235 | orchestrator | Tuesday 24 March 2026 05:32:01 +0000 (0:00:01.825) 0:42:42.294 ********* 2026-03-24 05:32:04.395242 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-24 05:32:04.395249 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-03-24 05:32:04.395256 | orchestrator | 2026-03-24 05:32:04.395263 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-24 05:32:04.395274 | orchestrator | Tuesday 24 March 2026 05:32:04 +0000 (0:00:02.988) 0:42:45.283 ********* 2026-03-24 05:32:54.360666 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-24 05:32:54.360804 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-03-24 05:32:54.360834 | orchestrator | 2026-03-24 05:32:54.360855 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-24 05:32:54.360876 | orchestrator | Tuesday 24 March 2026 05:32:08 +0000 (0:00:04.438) 
0:42:49.721 ********* 2026-03-24 05:32:54.360895 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.360915 | orchestrator | 2026-03-24 05:32:54.360935 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-24 05:32:54.360954 | orchestrator | Tuesday 24 March 2026 05:32:09 +0000 (0:00:00.866) 0:42:50.587 ********* 2026-03-24 05:32:54.361011 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361031 | orchestrator | 2026-03-24 05:32:54.361052 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-24 05:32:54.361072 | orchestrator | Tuesday 24 March 2026 05:32:10 +0000 (0:00:00.881) 0:42:51.469 ********* 2026-03-24 05:32:54.361089 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361100 | orchestrator | 2026-03-24 05:32:54.361111 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-24 05:32:54.361122 | orchestrator | Tuesday 24 March 2026 05:32:11 +0000 (0:00:00.924) 0:42:52.393 ********* 2026-03-24 05:32:54.361133 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361143 | orchestrator | 2026-03-24 05:32:54.361156 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-24 05:32:54.361169 | orchestrator | Tuesday 24 March 2026 05:32:12 +0000 (0:00:00.833) 0:42:53.227 ********* 2026-03-24 05:32:54.361182 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361225 | orchestrator | 2026-03-24 05:32:54.361246 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-24 05:32:54.361263 | orchestrator | Tuesday 24 March 2026 05:32:13 +0000 (0:00:00.772) 0:42:53.999 ********* 2026-03-24 05:32:54.361289 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-24 05:32:54.361313 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-24 05:32:54.361331 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-24 05:32:54.361349 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-03-24 05:32:54.361366 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:32:54.361382 | orchestrator | 2026-03-24 05:32:54.361398 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 05:32:54.361416 | orchestrator | Tuesday 24 March 2026 05:32:26 +0000 (0:00:13.836) 0:43:07.836 ********* 2026-03-24 05:32:54.361435 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361454 | orchestrator | 2026-03-24 05:32:54.361473 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-24 05:32:54.361492 | orchestrator | Tuesday 24 March 2026 05:32:27 +0000 (0:00:00.787) 0:43:08.624 ********* 2026-03-24 05:32:54.361510 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361528 | orchestrator | 2026-03-24 05:32:54.361547 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-24 05:32:54.361565 | orchestrator | Tuesday 24 March 2026 05:32:28 +0000 (0:00:00.762) 0:43:09.386 ********* 2026-03-24 05:32:54.361583 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361600 | orchestrator | 2026-03-24 05:32:54.361617 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-24 05:32:54.361634 | orchestrator | Tuesday 24 March 2026 05:32:29 +0000 (0:00:00.758) 0:43:10.145 ********* 2026-03-24 05:32:54.361653 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361672 | orchestrator 
| 2026-03-24 05:32:54.361691 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-24 05:32:54.361710 | orchestrator | Tuesday 24 March 2026 05:32:30 +0000 (0:00:00.783) 0:43:10.928 ********* 2026-03-24 05:32:54.361729 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361741 | orchestrator | 2026-03-24 05:32:54.361758 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-24 05:32:54.361775 | orchestrator | Tuesday 24 March 2026 05:32:30 +0000 (0:00:00.834) 0:43:11.763 ********* 2026-03-24 05:32:54.361794 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361812 | orchestrator | 2026-03-24 05:32:54.361830 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-24 05:32:54.361846 | orchestrator | Tuesday 24 March 2026 05:32:31 +0000 (0:00:00.765) 0:43:12.528 ********* 2026-03-24 05:32:54.361863 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:32:54.361917 | orchestrator | 2026-03-24 05:32:54.361937 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-24 05:32:54.361956 | orchestrator | 2026-03-24 05:32:54.361973 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:32:54.361992 | orchestrator | Tuesday 24 March 2026 05:32:32 +0000 (0:00:00.936) 0:43:13.464 ********* 2026-03-24 05:32:54.362008 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-24 05:32:54.362109 | orchestrator | 2026-03-24 05:32:54.362129 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:32:54.362166 | orchestrator | Tuesday 24 March 2026 05:32:33 +0000 (0:00:01.259) 0:43:14.723 ********* 2026-03-24 05:32:54.362186 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:32:54.362225 | orchestrator | 
2026-03-24 05:32:54.362237 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:32:54.362248 | orchestrator | Tuesday 24 March 2026 05:32:35 +0000 (0:00:01.435) 0:43:16.159 ********* 2026-03-24 05:32:54.362259 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:32:54.362270 | orchestrator | 2026-03-24 05:32:54.362281 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:32:54.362292 | orchestrator | Tuesday 24 March 2026 05:32:36 +0000 (0:00:01.142) 0:43:17.301 ********* 2026-03-24 05:32:54.362325 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:32:54.362337 | orchestrator | 2026-03-24 05:32:54.362349 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:32:54.362359 | orchestrator | Tuesday 24 March 2026 05:32:37 +0000 (0:00:01.437) 0:43:18.738 ********* 2026-03-24 05:32:54.362370 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:32:54.362381 | orchestrator | 2026-03-24 05:32:54.362395 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:32:54.362414 | orchestrator | Tuesday 24 March 2026 05:32:38 +0000 (0:00:01.115) 0:43:19.854 ********* 2026-03-24 05:32:54.362432 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:32:54.362449 | orchestrator | 2026-03-24 05:32:54.362467 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:32:54.362485 | orchestrator | Tuesday 24 March 2026 05:32:40 +0000 (0:00:01.136) 0:43:20.991 ********* 2026-03-24 05:32:54.362504 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:32:54.362523 | orchestrator | 2026-03-24 05:32:54.362542 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:32:54.362561 | orchestrator | Tuesday 24 March 2026 05:32:41 +0000 (0:00:01.121) 0:43:22.112 
********* 2026-03-24 05:32:54.362580 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:32:54.362599 | orchestrator | 2026-03-24 05:32:54.362613 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:32:54.362624 | orchestrator | Tuesday 24 March 2026 05:32:42 +0000 (0:00:01.114) 0:43:23.227 ********* 2026-03-24 05:32:54.362635 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:32:54.362645 | orchestrator | 2026-03-24 05:32:54.362656 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:32:54.362667 | orchestrator | Tuesday 24 March 2026 05:32:43 +0000 (0:00:01.099) 0:43:24.327 ********* 2026-03-24 05:32:54.362678 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:32:54.362689 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:32:54.362699 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:32:54.362710 | orchestrator | 2026-03-24 05:32:54.362721 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:32:54.362732 | orchestrator | Tuesday 24 March 2026 05:32:45 +0000 (0:00:01.953) 0:43:26.281 ********* 2026-03-24 05:32:54.362742 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:32:54.362753 | orchestrator | 2026-03-24 05:32:54.362764 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:32:54.362786 | orchestrator | Tuesday 24 March 2026 05:32:46 +0000 (0:00:01.211) 0:43:27.492 ********* 2026-03-24 05:32:54.362797 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:32:54.362808 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:32:54.362819 | 
orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:32:54.362829 | orchestrator | 2026-03-24 05:32:54.362840 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:32:54.362850 | orchestrator | Tuesday 24 March 2026 05:32:49 +0000 (0:00:03.249) 0:43:30.741 ********* 2026-03-24 05:32:54.362861 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-24 05:32:54.362872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-24 05:32:54.362883 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-24 05:32:54.362894 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:32:54.362904 | orchestrator | 2026-03-24 05:32:54.362915 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:32:54.362926 | orchestrator | Tuesday 24 March 2026 05:32:51 +0000 (0:00:01.746) 0:43:32.488 ********* 2026-03-24 05:32:54.362939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:32:54.362953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:32:54.362964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:32:54.362976 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:32:54.362986 | orchestrator | 2026-03-24 
05:32:54.362997 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:32:54.363008 | orchestrator | Tuesday 24 March 2026 05:32:53 +0000 (0:00:01.592) 0:43:34.081 ********* 2026-03-24 05:32:54.363028 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:32:54.363054 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:12.487841 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:12.487977 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:33:12.488007 | orchestrator | 2026-03-24 05:33:12.488029 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:33:12.488050 | orchestrator | Tuesday 24 March 2026 05:32:54 +0000 (0:00:01.167) 0:43:35.248 ********* 2026-03-24 05:33:12.488110 | orchestrator | 
ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:32:47.408793', 'end': '2026-03-24 05:32:47.458541', 'delta': '0:00:00.049748', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:33:12.488136 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:32:47.933101', 'end': '2026-03-24 05:32:47.991043', 'delta': '0:00:00.057942', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:33:12.488158 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:32:48.604757', 'end': '2026-03-24 05:32:48.657191', 'delta': '0:00:00.052434', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:33:12.488176 | orchestrator | 2026-03-24 05:33:12.488196 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:33:12.488305 | orchestrator | Tuesday 24 March 2026 05:32:55 +0000 (0:00:01.193) 0:43:36.442 ********* 2026-03-24 05:33:12.488327 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:33:12.488347 | orchestrator | 2026-03-24 05:33:12.488364 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:33:12.488377 | orchestrator | Tuesday 24 March 2026 05:32:56 +0000 (0:00:01.198) 0:43:37.641 ********* 2026-03-24 05:33:12.488395 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:33:12.488413 | orchestrator | 2026-03-24 05:33:12.488453 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:33:12.488477 | orchestrator | Tuesday 24 March 2026 05:32:57 +0000 (0:00:01.209) 0:43:38.850 ********* 2026-03-24 05:33:12.488497 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:33:12.488516 | orchestrator | 2026-03-24 05:33:12.488533 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:33:12.488545 | orchestrator | Tuesday 24 March 2026 05:32:59 +0000 (0:00:01.125) 0:43:39.976 ********* 2026-03-24 05:33:12.488558 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:33:12.488570 | orchestrator | 2026-03-24 05:33:12.488583 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:33:12.488595 | orchestrator | 
Tuesday 24 March 2026 05:33:00 +0000 (0:00:01.913) 0:43:41.890 ********* 2026-03-24 05:33:12.488607 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:33:12.488619 | orchestrator | 2026-03-24 05:33:12.488630 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:33:12.488654 | orchestrator | Tuesday 24 March 2026 05:33:02 +0000 (0:00:01.128) 0:43:43.018 ********* 2026-03-24 05:33:12.488687 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:33:12.488699 | orchestrator | 2026-03-24 05:33:12.488709 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:33:12.488720 | orchestrator | Tuesday 24 March 2026 05:33:03 +0000 (0:00:01.102) 0:43:44.121 ********* 2026-03-24 05:33:12.488730 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:33:12.488741 | orchestrator | 2026-03-24 05:33:12.488752 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:33:12.488762 | orchestrator | Tuesday 24 March 2026 05:33:04 +0000 (0:00:01.198) 0:43:45.320 ********* 2026-03-24 05:33:12.488773 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:33:12.488784 | orchestrator | 2026-03-24 05:33:12.488794 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:33:12.488805 | orchestrator | Tuesday 24 March 2026 05:33:05 +0000 (0:00:01.169) 0:43:46.489 ********* 2026-03-24 05:33:12.488815 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:33:12.488826 | orchestrator | 2026-03-24 05:33:12.488836 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:33:12.488847 | orchestrator | Tuesday 24 March 2026 05:33:06 +0000 (0:00:01.084) 0:43:47.574 ********* 2026-03-24 05:33:12.488858 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:33:12.488868 | orchestrator | 2026-03-24 05:33:12.488884 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:33:12.488903 | orchestrator | Tuesday 24 March 2026 05:33:07 +0000 (0:00:01.143) 0:43:48.717 ********* 2026-03-24 05:33:12.488920 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:33:12.488944 | orchestrator | 2026-03-24 05:33:12.488967 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:33:12.488985 | orchestrator | Tuesday 24 March 2026 05:33:08 +0000 (0:00:01.087) 0:43:49.805 ********* 2026-03-24 05:33:12.489002 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:33:12.489020 | orchestrator | 2026-03-24 05:33:12.489036 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:33:12.489052 | orchestrator | Tuesday 24 March 2026 05:33:10 +0000 (0:00:01.132) 0:43:50.938 ********* 2026-03-24 05:33:12.489069 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:33:12.489086 | orchestrator | 2026-03-24 05:33:12.489105 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:33:12.489124 | orchestrator | Tuesday 24 March 2026 05:33:11 +0000 (0:00:01.087) 0:43:52.025 ********* 2026-03-24 05:33:12.489142 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:33:12.489159 | orchestrator | 2026-03-24 05:33:12.489177 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:33:12.489188 | orchestrator | Tuesday 24 March 2026 05:33:12 +0000 (0:00:01.140) 0:43:53.166 ********* 2026-03-24 05:33:12.489228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:33:12.489251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'uuids': ['37d3be03-52e4-42ec-a3b4-48d6e6f02ec4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4']}})  2026-03-24 05:33:12.489284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b1c01c59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:33:12.489311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f']}})  2026-03-24 05:33:13.582820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:33:13.582921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:33:13.582940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-41-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:33:13.582955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:33:13.582967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA', 'dm-uuid-CRYPT-LUKS2-f7a38ad6fb8a47e49b12a27889e2fccd-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:33:13.583004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:33:13.583032 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'uuids': ['f7a38ad6-fb8a-47e4-9b12-a27889e2fccd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA']}})  2026-03-24 05:33:13.583065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59']}})  2026-03-24 05:33:13.583079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:33:13.583094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8862b49e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:33:13.583121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:33:13.583133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:33:13.583153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4', 'dm-uuid-CRYPT-LUKS2-37d3be0352e442eca3b448d6e6f02ec4-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:33:13.798582 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:33:13.798704 | orchestrator | 2026-03-24 05:33:13.798731 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:33:13.798745 | orchestrator | Tuesday 24 March 2026 05:33:13 +0000 (0:00:01.307) 0:43:54.474 ********* 2026-03-24 05:33:13.798759 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.798775 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'uuids': ['37d3be03-52e4-42ec-a3b4-48d6e6f02ec4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.798789 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b1c01c59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.798878 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.798917 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.798930 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.798942 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.798954 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.798973 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA', 'dm-uuid-CRYPT-LUKS2-f7a38ad6fb8a47e49b12a27889e2fccd-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.798990 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:13.799011 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'uuids': ['f7a38ad6-fb8a-47e4-9b12-a27889e2fccd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:33:26.644881 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59']}}, 'ansible_loop_var': 'item'})
2026-03-24 05:33:26.645015 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-24 05:33:26.645070 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8862b49e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-24 05:33:26.645103 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-24 05:33:26.645115 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-24 05:33:26.645126 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4', 'dm-uuid-CRYPT-LUKS2-37d3be0352e442eca3b448d6e6f02ec4-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-24 05:33:26.645144 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:33:26.645157 | orchestrator |
2026-03-24 05:33:26.645168 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-24 05:33:26.645179 | orchestrator | Tuesday 24 March 2026 05:33:14 +0000 (0:00:01.357) 0:43:55.831 *********
2026-03-24 05:33:26.645189 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:33:26.645199 | orchestrator |
2026-03-24 05:33:26.645308 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-24 05:33:26.645319 | orchestrator | Tuesday 24 March 2026 05:33:16 +0000 (0:00:01.452) 0:43:57.284 *********
2026-03-24 05:33:26.645329 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:33:26.645339 | orchestrator |
2026-03-24 05:33:26.645349 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 05:33:26.645358 | orchestrator | Tuesday 24 March 2026 05:33:17 +0000 (0:00:01.090) 0:43:58.375 *********
2026-03-24 05:33:26.645368 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:33:26.645377 | orchestrator |
2026-03-24 05:33:26.645388 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 05:33:26.645399 | orchestrator | Tuesday 24 March 2026 05:33:18 +0000 (0:00:01.433) 0:43:59.809 *********
2026-03-24 05:33:26.645410 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:33:26.645421 | orchestrator |
2026-03-24 05:33:26.645438 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 05:33:26.645449 | orchestrator | Tuesday 24 March 2026 05:33:20 +0000 (0:00:01.110) 0:44:00.919 *********
2026-03-24 05:33:26.645460 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:33:26.645471 | orchestrator |
2026-03-24 05:33:26.645482 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 05:33:26.645493 | orchestrator | Tuesday 24 March 2026 05:33:21 +0000 (0:00:01.228) 0:44:02.147 *********
2026-03-24 05:33:26.645503 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:33:26.645513 | orchestrator |
2026-03-24 05:33:26.645522 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-24 05:33:26.645532 | orchestrator | Tuesday 24 March 2026 05:33:22 +0000 (0:00:01.115) 0:44:03.263 *********
2026-03-24 05:33:26.645541 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-24 05:33:26.645551 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-24 05:33:26.645561 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-24 05:33:26.645570 | orchestrator |
2026-03-24 05:33:26.645579 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-24 05:33:26.645589 | orchestrator | Tuesday 24 March 2026 05:33:24 +0000 (0:00:02.000) 0:44:05.264 *********
2026-03-24 05:33:26.645598 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-24 05:33:26.645608 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-24 05:33:26.645617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-24 05:33:26.645626 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:33:26.645636 | orchestrator |
2026-03-24 05:33:26.645645 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-24 05:33:26.645655 | orchestrator | Tuesday 24 March 2026 05:33:25 +0000 (0:00:01.166) 0:44:06.431 *********
2026-03-24 05:33:26.645664 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-03-24 05:33:26.645682 | orchestrator |
2026-03-24 05:33:26.645701 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:34:08.249618 | orchestrator | Tuesday 24 March 2026 05:33:26 +0000 (0:00:01.101) 0:44:07.532 *********
2026-03-24 05:34:08.249728 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.249744 | orchestrator |
2026-03-24 05:34:08.249756 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:34:08.249767 | orchestrator | Tuesday 24 March 2026 05:33:27 +0000 (0:00:01.123) 0:44:08.656 *********
2026-03-24 05:34:08.249777 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.249787 | orchestrator |
2026-03-24 05:34:08.249797 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:34:08.249807 | orchestrator | Tuesday 24 March 2026 05:33:28 +0000 (0:00:01.118) 0:44:09.775 *********
2026-03-24 05:34:08.249817 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.249826 | orchestrator |
2026-03-24 05:34:08.249836 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:34:08.249846 | orchestrator | Tuesday 24 March 2026 05:33:30 +0000 (0:00:01.139) 0:44:10.914 *********
2026-03-24 05:34:08.249873 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.249884 | orchestrator |
2026-03-24 05:34:08.249904 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:34:08.249914 | orchestrator | Tuesday 24 March 2026 05:33:31 +0000 (0:00:01.191) 0:44:12.106 *********
2026-03-24 05:34:08.249924 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-24 05:34:08.249935 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-24 05:34:08.249945 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-24 05:34:08.249955 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.249964 | orchestrator |
2026-03-24 05:34:08.249974 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:34:08.249984 | orchestrator | Tuesday 24 March 2026 05:33:32 +0000 (0:00:01.394) 0:44:13.500 *********
2026-03-24 05:34:08.249993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-24 05:34:08.250003 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-24 05:34:08.250013 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-24 05:34:08.250094 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.250112 | orchestrator |
2026-03-24 05:34:08.250129 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:34:08.250148 | orchestrator | Tuesday 24 March 2026 05:33:34 +0000 (0:00:01.410) 0:44:14.911 *********
2026-03-24 05:34:08.250165 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-24 05:34:08.250181 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-24 05:34:08.250192 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-24 05:34:08.250203 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.250214 | orchestrator |
2026-03-24 05:34:08.250247 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:34:08.250259 | orchestrator | Tuesday 24 March 2026 05:33:35 +0000 (0:00:01.404) 0:44:16.315 *********
2026-03-24 05:34:08.250270 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.250281 | orchestrator |
2026-03-24 05:34:08.250291 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:34:08.250300 | orchestrator | Tuesday 24 March 2026 05:33:36 +0000 (0:00:01.174) 0:44:17.490 *********
2026-03-24 05:34:08.250310 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-24 05:34:08.250320 | orchestrator |
2026-03-24 05:34:08.250329 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-24 05:34:08.250339 | orchestrator | Tuesday 24 March 2026 05:33:38 +0000 (0:00:01.719) 0:44:19.209 *********
2026-03-24 05:34:08.250349 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:34:08.250402 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:34:08.250412 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:34:08.250423 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-24 05:34:08.250432 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 05:34:08.250442 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-24 05:34:08.250451 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:34:08.250461 | orchestrator |
2026-03-24 05:34:08.250470 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-24 05:34:08.250480 | orchestrator | Tuesday 24 March 2026 05:33:40 +0000 (0:00:02.084) 0:44:21.294 *********
2026-03-24 05:34:08.250489 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:34:08.250499 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:34:08.250508 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:34:08.250518 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-24 05:34:08.250527 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 05:34:08.250537 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-03-24 05:34:08.250546 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:34:08.250555 | orchestrator |
2026-03-24 05:34:08.250565 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-03-24 05:34:08.250574 | orchestrator | Tuesday 24 March 2026 05:33:42 +0000 (0:00:02.133) 0:44:23.427 *********
2026-03-24 05:34:08.250584 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.250594 | orchestrator |
2026-03-24 05:34:08.250603 | orchestrator | TASK [Set num_osds] ************************************************************
2026-03-24 05:34:08.250629 | orchestrator | Tuesday 24 March 2026 05:33:43 +0000 (0:00:01.140) 0:44:24.568 *********
2026-03-24 05:34:08.250639 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.250649 | orchestrator |
2026-03-24 05:34:08.250658 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-03-24 05:34:08.250668 | orchestrator | Tuesday 24 March 2026 05:33:44 +0000 (0:00:00.773) 0:44:25.342 *********
2026-03-24 05:34:08.250677 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.250687 | orchestrator |
2026-03-24 05:34:08.250696 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-03-24 05:34:08.250706 | orchestrator | Tuesday 24 March 2026 05:33:45 +0000 (0:00:00.887) 0:44:26.229 *********
2026-03-24 05:34:08.250715 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-03-24 05:34:08.250725 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-24 05:34:08.250735 | orchestrator |
2026-03-24 05:34:08.250744 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 05:34:08.250754 | orchestrator | Tuesday 24 March 2026 05:33:49 +0000 (0:00:03.917) 0:44:30.147 *********
2026-03-24 05:34:08.250763 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-03-24 05:34:08.250773 | orchestrator |
2026-03-24 05:34:08.250783 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 05:34:08.250792 | orchestrator | Tuesday 24 March 2026 05:33:50 +0000 (0:00:01.118) 0:44:31.265 *********
2026-03-24 05:34:08.250802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-03-24 05:34:08.250811 | orchestrator |
2026-03-24 05:34:08.250821 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 05:34:08.250830 | orchestrator | Tuesday 24 March 2026 05:33:51 +0000 (0:00:01.091) 0:44:32.357 *********
2026-03-24 05:34:08.250848 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.250858 | orchestrator |
2026-03-24 05:34:08.250868 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 05:34:08.250877 | orchestrator | Tuesday 24 March 2026 05:33:52 +0000 (0:00:01.118) 0:44:33.475 *********
2026-03-24 05:34:08.250886 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.250896 | orchestrator |
2026-03-24 05:34:08.250905 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 05:34:08.250915 | orchestrator | Tuesday 24 March 2026 05:33:54 +0000 (0:00:01.527) 0:44:35.003 *********
2026-03-24 05:34:08.250924 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.250934 | orchestrator |
2026-03-24 05:34:08.250944 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 05:34:08.250953 | orchestrator | Tuesday 24 March 2026 05:33:55 +0000 (0:00:01.560) 0:44:36.564 *********
2026-03-24 05:34:08.250963 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.250972 | orchestrator |
2026-03-24 05:34:08.250982 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 05:34:08.250991 | orchestrator | Tuesday 24 March 2026 05:33:57 +0000 (0:00:01.535) 0:44:38.100 *********
2026-03-24 05:34:08.251001 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.251010 | orchestrator |
2026-03-24 05:34:08.251020 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 05:34:08.251030 | orchestrator | Tuesday 24 March 2026 05:33:58 +0000 (0:00:01.126) 0:44:39.227 *********
2026-03-24 05:34:08.251039 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.251049 | orchestrator |
2026-03-24 05:34:08.251058 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 05:34:08.251068 | orchestrator | Tuesday 24 March 2026 05:33:59 +0000 (0:00:01.111) 0:44:40.339 *********
2026-03-24 05:34:08.251077 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.251086 | orchestrator |
2026-03-24 05:34:08.251096 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 05:34:08.251111 | orchestrator | Tuesday 24 March 2026 05:34:00 +0000 (0:00:01.102) 0:44:41.441 *********
2026-03-24 05:34:08.251120 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.251130 | orchestrator |
2026-03-24 05:34:08.251139 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 05:34:08.251149 | orchestrator | Tuesday 24 March 2026 05:34:02 +0000 (0:00:01.498) 0:44:42.939 *********
2026-03-24 05:34:08.251158 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.251168 | orchestrator |
2026-03-24 05:34:08.251177 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 05:34:08.251187 | orchestrator | Tuesday 24 March 2026 05:34:03 +0000 (0:00:01.501) 0:44:44.441 *********
2026-03-24 05:34:08.251196 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.251206 | orchestrator |
2026-03-24 05:34:08.251215 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:34:08.251241 | orchestrator | Tuesday 24 March 2026 05:34:04 +0000 (0:00:00.755) 0:44:45.196 *********
2026-03-24 05:34:08.251251 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.251260 | orchestrator |
2026-03-24 05:34:08.251281 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:34:08.251300 | orchestrator | Tuesday 24 March 2026 05:34:05 +0000 (0:00:00.784) 0:44:45.981 *********
2026-03-24 05:34:08.251310 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.251319 | orchestrator |
2026-03-24 05:34:08.251329 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:34:08.251338 | orchestrator | Tuesday 24 March 2026 05:34:05 +0000 (0:00:00.817) 0:44:46.798 *********
2026-03-24 05:34:08.251348 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.251357 | orchestrator |
2026-03-24 05:34:08.251366 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:34:08.251376 | orchestrator | Tuesday 24 March 2026 05:34:06 +0000 (0:00:00.788) 0:44:47.587 *********
2026-03-24 05:34:08.251385 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:08.251404 | orchestrator |
2026-03-24 05:34:08.251413 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:34:08.251423 | orchestrator | Tuesday 24 March 2026 05:34:07 +0000 (0:00:00.789) 0:44:48.377 *********
2026-03-24 05:34:08.251433 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:08.251442 | orchestrator |
2026-03-24 05:34:08.251459 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:34:48.425332 | orchestrator | Tuesday 24 March 2026 05:34:08 +0000 (0:00:00.759) 0:44:49.137 *********
2026-03-24 05:34:48.425470 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.425490 | orchestrator |
2026-03-24 05:34:48.425504 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:34:48.425516 | orchestrator | Tuesday 24 March 2026 05:34:08 +0000 (0:00:00.762) 0:44:49.899 *********
2026-03-24 05:34:48.425568 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.425583 | orchestrator |
2026-03-24 05:34:48.425595 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:34:48.425606 | orchestrator | Tuesday 24 March 2026 05:34:09 +0000 (0:00:00.758) 0:44:50.658 *********
2026-03-24 05:34:48.425618 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:48.425630 | orchestrator |
2026-03-24 05:34:48.425641 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:34:48.425652 | orchestrator | Tuesday 24 March 2026 05:34:10 +0000 (0:00:00.810) 0:44:51.468 *********
2026-03-24 05:34:48.425663 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:48.425674 | orchestrator |
2026-03-24 05:34:48.425685 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:34:48.425696 | orchestrator | Tuesday 24 March 2026 05:34:11 +0000 (0:00:00.772) 0:44:52.241 *********
2026-03-24 05:34:48.425707 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.425718 | orchestrator |
2026-03-24 05:34:48.425729 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:34:48.425739 | orchestrator | Tuesday 24 March 2026 05:34:12 +0000 (0:00:00.755) 0:44:52.997 *********
2026-03-24 05:34:48.425750 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.425761 | orchestrator |
2026-03-24 05:34:48.425772 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 05:34:48.425783 | orchestrator | Tuesday 24 March 2026 05:34:12 +0000 (0:00:00.753) 0:44:53.750 *********
2026-03-24 05:34:48.425794 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.425822 | orchestrator |
2026-03-24 05:34:48.425833 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 05:34:48.425858 | orchestrator | Tuesday 24 March 2026 05:34:14 +0000 (0:00:01.223) 0:44:54.974 *********
2026-03-24 05:34:48.425870 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.425882 | orchestrator |
2026-03-24 05:34:48.425894 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 05:34:48.425906 | orchestrator | Tuesday 24 March 2026 05:34:14 +0000 (0:00:00.743) 0:44:55.717 *********
2026-03-24 05:34:48.425918 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.425930 | orchestrator |
2026-03-24 05:34:48.425943 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 05:34:48.425956 | orchestrator | Tuesday 24 March 2026 05:34:15 +0000 (0:00:00.764) 0:44:56.481 *********
2026-03-24 05:34:48.425968 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.425980 | orchestrator |
2026-03-24 05:34:48.425992 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 05:34:48.426004 | orchestrator | Tuesday 24 March 2026 05:34:16 +0000 (0:00:00.742) 0:44:57.224 *********
2026-03-24 05:34:48.426071 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426087 | orchestrator |
2026-03-24 05:34:48.426099 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 05:34:48.426112 | orchestrator | Tuesday 24 March 2026 05:34:17 +0000 (0:00:00.755) 0:44:57.979 *********
2026-03-24 05:34:48.426152 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426175 | orchestrator |
2026-03-24 05:34:48.426188 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 05:34:48.426201 | orchestrator | Tuesday 24 March 2026 05:34:17 +0000 (0:00:00.775) 0:44:58.755 *********
2026-03-24 05:34:48.426212 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426223 | orchestrator |
2026-03-24 05:34:48.426289 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 05:34:48.426301 | orchestrator | Tuesday 24 March 2026 05:34:18 +0000 (0:00:00.754) 0:44:59.510 *********
2026-03-24 05:34:48.426312 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426323 | orchestrator |
2026-03-24 05:34:48.426334 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 05:34:48.426348 | orchestrator | Tuesday 24 March 2026 05:34:19 +0000 (0:00:00.803) 0:45:00.314 *********
2026-03-24 05:34:48.426367 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426384 | orchestrator |
2026-03-24 05:34:48.426401 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 05:34:48.426419 | orchestrator | Tuesday 24 March 2026 05:34:20 +0000 (0:00:00.760) 0:45:01.074 *********
2026-03-24 05:34:48.426436 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426454 | orchestrator |
2026-03-24 05:34:48.426473 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 05:34:48.426491 | orchestrator | Tuesday 24 March 2026 05:34:20 +0000 (0:00:00.820) 0:45:01.895 *********
2026-03-24 05:34:48.426502 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:48.426512 | orchestrator |
2026-03-24 05:34:48.426524 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 05:34:48.426535 | orchestrator | Tuesday 24 March 2026 05:34:22 +0000 (0:00:01.534) 0:45:03.429 *********
2026-03-24 05:34:48.426545 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:48.426556 | orchestrator |
2026-03-24 05:34:48.426567 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 05:34:48.426577 | orchestrator | Tuesday 24 March 2026 05:34:24 +0000 (0:00:01.955) 0:45:05.384 *********
2026-03-24 05:34:48.426588 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-03-24 05:34:48.426600 | orchestrator |
2026-03-24 05:34:48.426616 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 05:34:48.426633 | orchestrator | Tuesday 24 March 2026 05:34:25 +0000 (0:00:01.130) 0:45:06.515 *********
2026-03-24 05:34:48.426661 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426681 | orchestrator |
2026-03-24 05:34:48.426698 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 05:34:48.426740 | orchestrator | Tuesday 24 March 2026 05:34:26 +0000 (0:00:01.113) 0:45:07.629 *********
2026-03-24 05:34:48.426758 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426777 | orchestrator |
2026-03-24 05:34:48.426795 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 05:34:48.426814 | orchestrator | Tuesday 24 March 2026 05:34:27 +0000 (0:00:01.106) 0:45:08.736 *********
2026-03-24 05:34:48.426832 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 05:34:48.426843 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 05:34:48.426854 | orchestrator |
2026-03-24 05:34:48.426864 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 05:34:48.426875 | orchestrator | Tuesday 24 March 2026 05:34:29 +0000 (0:00:01.832) 0:45:10.568 *********
2026-03-24 05:34:48.426886 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:48.426896 | orchestrator |
2026-03-24 05:34:48.426907 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-24 05:34:48.426918 | orchestrator | Tuesday 24 March 2026 05:34:31 +0000 (0:00:01.431) 0:45:12.000 *********
2026-03-24 05:34:48.426929 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426939 | orchestrator |
2026-03-24 05:34:48.426964 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-24 05:34:48.426975 | orchestrator | Tuesday 24 March 2026 05:34:32 +0000 (0:00:01.135) 0:45:13.135 *********
2026-03-24 05:34:48.426986 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.426996 | orchestrator |
2026-03-24 05:34:48.427007 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 05:34:48.427017 | orchestrator | Tuesday 24 March 2026 05:34:33 +0000 (0:00:00.811) 0:45:13.947 *********
2026-03-24 05:34:48.427028 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.427039 | orchestrator |
2026-03-24 05:34:48.427049 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 05:34:48.427060 | orchestrator | Tuesday 24 March 2026 05:34:33 +0000 (0:00:00.758) 0:45:14.705 *********
2026-03-24 05:34:48.427071 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-03-24 05:34:48.427081 | orchestrator |
2026-03-24 05:34:48.427092 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-24 05:34:48.427102 | orchestrator | Tuesday 24 March 2026 05:34:34 +0000 (0:00:01.119) 0:45:15.825 *********
2026-03-24 05:34:48.427113 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:48.427124 | orchestrator |
2026-03-24 05:34:48.427135 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-24 05:34:48.427145 | orchestrator | Tuesday 24 March 2026 05:34:36 +0000 (0:00:01.741) 0:45:17.566 *********
2026-03-24 05:34:48.427156 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-24 05:34:48.427167 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-24 05:34:48.427177 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-24 05:34:48.427188 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.427199 | orchestrator |
2026-03-24 05:34:48.427209 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-24 05:34:48.427220 | orchestrator | Tuesday 24 March 2026 05:34:37 +0000 (0:00:01.121) 0:45:18.688 *********
2026-03-24 05:34:48.427276 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.427292 | orchestrator |
2026-03-24 05:34:48.427303 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-24 05:34:48.427313 | orchestrator | Tuesday 24 March 2026 05:34:38 +0000 (0:00:01.112) 0:45:19.800 *********
2026-03-24 05:34:48.427324 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.427334 | orchestrator |
2026-03-24 05:34:48.427353 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-24 05:34:48.427364 | orchestrator | Tuesday 24 March 2026 05:34:40 +0000 (0:00:01.156) 0:45:20.956 *********
2026-03-24 05:34:48.427375 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.427385 | orchestrator |
2026-03-24 05:34:48.427396 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-24 05:34:48.427407 | orchestrator | Tuesday 24 March 2026 05:34:41 +0000 (0:00:01.146) 0:45:22.102 *********
2026-03-24 05:34:48.427417 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.427428 | orchestrator |
2026-03-24 05:34:48.427439 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-24 05:34:48.427449 | orchestrator | Tuesday 24 March 2026 05:34:42 +0000 (0:00:01.114) 0:45:23.217 *********
2026-03-24 05:34:48.427460 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.427471 | orchestrator |
2026-03-24 05:34:48.427481 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 05:34:48.427492 | orchestrator | Tuesday 24 March 2026 05:34:43 +0000 (0:00:00.797) 0:45:24.015 *********
2026-03-24 05:34:48.427503 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:48.427513 | orchestrator |
2026-03-24 05:34:48.427524 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 05:34:48.427535 | orchestrator | Tuesday 24 March 2026 05:34:45 +0000 (0:00:02.156) 0:45:26.172 *********
2026-03-24 05:34:48.427553 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:34:48.427571 | orchestrator |
2026-03-24 05:34:48.427589 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 05:34:48.427606 | orchestrator | Tuesday 24 March 2026 05:34:46 +0000 (0:00:00.782) 0:45:26.954 *********
2026-03-24 05:34:48.427624 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-24 05:34:48.427641 | orchestrator |
2026-03-24 05:34:48.427660 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-24 05:34:48.427678 | orchestrator | Tuesday 24 March 2026 05:34:47 +0000 (0:00:01.205) 0:45:28.160 *********
2026-03-24 05:34:48.427698 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:34:48.427717 | orchestrator |
2026-03-24 05:34:48.427736 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-24 05:34:48.427761 | orchestrator | Tuesday 24 March 2026 05:34:48 +0000 (0:00:01.152) 0:45:29.313 *********
2026-03-24 05:35:32.354326 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:35:32.354459 | orchestrator |
2026-03-24 05:35:32.354493 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-24 05:35:32.354514 | orchestrator | Tuesday 24 March 2026 05:34:49 +0000 (0:00:01.156) 0:45:30.470 *********
2026-03-24 05:35:32.354532 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:35:32.354586 | orchestrator |
2026-03-24 05:35:32.354606 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-24 05:35:32.354625 | orchestrator | Tuesday 24 March 2026 05:34:50 +0000 (0:00:01.168) 0:45:31.638 *********
2026-03-24 05:35:32.354637 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:35:32.354648 | orchestrator |
2026-03-24 05:35:32.354659 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-24 05:35:32.354670 | orchestrator | Tuesday 24 March 2026 05:34:51 +0000 (0:00:01.143) 0:45:32.782 *********
2026-03-24 05:35:32.354681 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:35:32.354692 | orchestrator |
2026-03-24 05:35:32.354703 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-24 05:35:32.354714 | orchestrator | Tuesday 24 March 2026 05:34:52 +0000 (0:00:01.117) 0:45:33.900 *********
2026-03-24 05:35:32.354725 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:35:32.354736 | orchestrator |
2026-03-24 05:35:32.354747 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-24 05:35:32.354757 | orchestrator | Tuesday 24 March 2026 05:34:54 +0000 (0:00:01.129) 0:45:35.029 *********
2026-03-24 05:35:32.354768 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:35:32.354779 | orchestrator |
2026-03-24 05:35:32.354790 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-24 05:35:32.354801 | orchestrator | Tuesday 24 March 2026 05:34:55 +0000 (0:00:01.119) 0:45:36.149 *********
2026-03-24 05:35:32.354812 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:35:32.354825 | orchestrator |
2026-03-24 05:35:32.354837 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-24 05:35:32.354849 | orchestrator | Tuesday 24 March 2026 05:34:56 +0000 (0:00:01.155) 0:45:37.304 *********
2026-03-24 05:35:32.354861 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:35:32.354875 | orchestrator |
2026-03-24 05:35:32.354887 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 05:35:32.354899 | orchestrator | Tuesday 24 March 2026 05:34:57 +0000 (0:00:00.787) 0:45:38.092 *********
2026-03-24 05:35:32.354912 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-24 05:35:32.354924 | orchestrator |
2026-03-24 05:35:32.354938 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-24 05:35:32.354950 | orchestrator | Tuesday 24 March 2026 05:34:58 +0000 (0:00:01.115) 0:45:39.207 *********
2026-03-24 05:35:32.354963 | orchestrator | ok: [testbed-node-5] =>
(item=/etc/ceph) 2026-03-24 05:35:32.354976 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-24 05:35:32.355015 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-24 05:35:32.355027 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-24 05:35:32.355039 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-24 05:35:32.355051 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-24 05:35:32.355063 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-24 05:35:32.355076 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-24 05:35:32.355088 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 05:35:32.355101 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 05:35:32.355128 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 05:35:32.355139 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 05:35:32.355150 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 05:35:32.355160 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 05:35:32.355171 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-24 05:35:32.355182 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-24 05:35:32.355192 | orchestrator | 2026-03-24 05:35:32.355203 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-24 05:35:32.355214 | orchestrator | Tuesday 24 March 2026 05:35:04 +0000 (0:00:06.329) 0:45:45.536 ********* 2026-03-24 05:35:32.355224 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-24 05:35:32.355235 | orchestrator | 2026-03-24 05:35:32.355273 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-03-24 05:35:32.355285 | orchestrator | Tuesday 24 March 2026 05:35:05 +0000 (0:00:01.096) 0:45:46.633 ********* 2026-03-24 05:35:32.355296 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 05:35:32.355308 | orchestrator | 2026-03-24 05:35:32.355319 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-24 05:35:32.355330 | orchestrator | Tuesday 24 March 2026 05:35:07 +0000 (0:00:01.509) 0:45:48.142 ********* 2026-03-24 05:35:32.355341 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 05:35:32.355352 | orchestrator | 2026-03-24 05:35:32.355362 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-24 05:35:32.355373 | orchestrator | Tuesday 24 March 2026 05:35:08 +0000 (0:00:01.597) 0:45:49.740 ********* 2026-03-24 05:35:32.355384 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355399 | orchestrator | 2026-03-24 05:35:32.355418 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-24 05:35:32.355459 | orchestrator | Tuesday 24 March 2026 05:35:09 +0000 (0:00:00.779) 0:45:50.519 ********* 2026-03-24 05:35:32.355477 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355494 | orchestrator | 2026-03-24 05:35:32.355511 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-24 05:35:32.355529 | orchestrator | Tuesday 24 March 2026 05:35:10 +0000 (0:00:00.794) 0:45:51.314 ********* 2026-03-24 05:35:32.355546 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355565 | orchestrator | 2026-03-24 05:35:32.355582 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-03-24 05:35:32.355600 | orchestrator | Tuesday 24 March 2026 05:35:11 +0000 (0:00:00.755) 0:45:52.069 ********* 2026-03-24 05:35:32.355618 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355637 | orchestrator | 2026-03-24 05:35:32.355651 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-24 05:35:32.355662 | orchestrator | Tuesday 24 March 2026 05:35:11 +0000 (0:00:00.751) 0:45:52.821 ********* 2026-03-24 05:35:32.355673 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355695 | orchestrator | 2026-03-24 05:35:32.355706 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-24 05:35:32.355718 | orchestrator | Tuesday 24 March 2026 05:35:12 +0000 (0:00:00.767) 0:45:53.589 ********* 2026-03-24 05:35:32.355728 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355739 | orchestrator | 2026-03-24 05:35:32.355750 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-24 05:35:32.355761 | orchestrator | Tuesday 24 March 2026 05:35:13 +0000 (0:00:00.796) 0:45:54.385 ********* 2026-03-24 05:35:32.355771 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355782 | orchestrator | 2026-03-24 05:35:32.355793 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-24 05:35:32.355804 | orchestrator | Tuesday 24 March 2026 05:35:14 +0000 (0:00:00.772) 0:45:55.158 ********* 2026-03-24 05:35:32.355814 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355825 | orchestrator | 2026-03-24 05:35:32.355836 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 05:35:32.355847 | orchestrator | Tuesday 24 
March 2026 05:35:15 +0000 (0:00:00.777) 0:45:55.936 ********* 2026-03-24 05:35:32.355857 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355868 | orchestrator | 2026-03-24 05:35:32.355879 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 05:35:32.355890 | orchestrator | Tuesday 24 March 2026 05:35:15 +0000 (0:00:00.784) 0:45:56.721 ********* 2026-03-24 05:35:32.355900 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.355911 | orchestrator | 2026-03-24 05:35:32.355922 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 05:35:32.355933 | orchestrator | Tuesday 24 March 2026 05:35:16 +0000 (0:00:00.765) 0:45:57.486 ********* 2026-03-24 05:35:32.355944 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:35:32.355954 | orchestrator | 2026-03-24 05:35:32.355965 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 05:35:32.355976 | orchestrator | Tuesday 24 March 2026 05:35:17 +0000 (0:00:00.816) 0:45:58.302 ********* 2026-03-24 05:35:32.355987 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-24 05:35:32.355998 | orchestrator | 2026-03-24 05:35:32.356008 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 05:35:32.356019 | orchestrator | Tuesday 24 March 2026 05:35:21 +0000 (0:00:04.163) 0:46:02.466 ********* 2026-03-24 05:35:32.356030 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 05:35:32.356041 | orchestrator | 2026-03-24 05:35:32.356059 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-24 05:35:32.356070 | orchestrator | Tuesday 24 March 2026 05:35:22 +0000 (0:00:00.833) 0:46:03.299 ********* 2026-03-24 05:35:32.356083 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-24 05:35:32.356098 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-24 05:35:32.356110 | orchestrator | 2026-03-24 05:35:32.356121 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:35:32.356132 | orchestrator | Tuesday 24 March 2026 05:35:29 +0000 (0:00:07.585) 0:46:10.885 ********* 2026-03-24 05:35:32.356143 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.356160 | orchestrator | 2026-03-24 05:35:32.356171 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:35:32.356182 | orchestrator | Tuesday 24 March 2026 05:35:30 +0000 (0:00:00.801) 0:46:11.687 ********* 2026-03-24 05:35:32.356193 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.356203 | orchestrator | 2026-03-24 05:35:32.356214 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:35:32.356226 | orchestrator | Tuesday 24 March 2026 05:35:31 +0000 (0:00:00.765) 0:46:12.453 ********* 2026-03-24 05:35:32.356236 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:35:32.356316 | orchestrator | 2026-03-24 05:35:32.356336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-24 05:35:32.356362 | orchestrator | Tuesday 24 March 2026 05:35:32 +0000 (0:00:00.787) 0:46:13.240 ********* 2026-03-24 05:36:17.035728 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.035844 | orchestrator | 2026-03-24 05:36:17.035860 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:36:17.035874 | orchestrator | Tuesday 24 March 2026 05:35:33 +0000 (0:00:00.794) 0:46:14.034 ********* 2026-03-24 05:36:17.035885 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.035896 | orchestrator | 2026-03-24 05:36:17.035907 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:36:17.035918 | orchestrator | Tuesday 24 March 2026 05:35:33 +0000 (0:00:00.769) 0:46:14.804 ********* 2026-03-24 05:36:17.035929 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.035941 | orchestrator | 2026-03-24 05:36:17.035952 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:36:17.035963 | orchestrator | Tuesday 24 March 2026 05:35:34 +0000 (0:00:00.859) 0:46:15.663 ********* 2026-03-24 05:36:17.035974 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 05:36:17.035985 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 05:36:17.036000 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-24 05:36:17.036012 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.036022 | orchestrator | 2026-03-24 05:36:17.036033 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:36:17.036044 | orchestrator | Tuesday 24 March 2026 05:35:36 +0000 (0:00:01.405) 0:46:17.069 ********* 2026-03-24 05:36:17.036054 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 05:36:17.036065 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 05:36:17.036076 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-24 05:36:17.036086 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.036097 | orchestrator | 2026-03-24 05:36:17.036108 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:36:17.036119 | orchestrator | Tuesday 24 March 2026 05:35:37 +0000 (0:00:01.397) 0:46:18.466 ********* 2026-03-24 05:36:17.036130 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 05:36:17.036141 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 05:36:17.036151 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-24 05:36:17.036162 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.036172 | orchestrator | 2026-03-24 05:36:17.036183 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:36:17.036194 | orchestrator | Tuesday 24 March 2026 05:35:38 +0000 (0:00:00.879) 0:46:19.346 ********* 2026-03-24 05:36:17.036205 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.036215 | orchestrator | 2026-03-24 05:36:17.036226 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:36:17.036237 | orchestrator | Tuesday 24 March 2026 05:35:39 +0000 (0:00:00.625) 0:46:19.972 ********* 2026-03-24 05:36:17.036248 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-24 05:36:17.036284 | orchestrator | 2026-03-24 05:36:17.036322 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:36:17.036336 | orchestrator | Tuesday 24 March 2026 05:35:39 +0000 (0:00:00.834) 0:46:20.807 ********* 2026-03-24 05:36:17.036349 | orchestrator | changed: [testbed-node-5] 2026-03-24 05:36:17.036361 | orchestrator | 
2026-03-24 05:36:17.036373 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-24 05:36:17.036385 | orchestrator | Tuesday 24 March 2026 05:35:41 +0000 (0:00:01.353) 0:46:22.161 ********* 2026-03-24 05:36:17.036397 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.036410 | orchestrator | 2026-03-24 05:36:17.036437 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-24 05:36:17.036450 | orchestrator | Tuesday 24 March 2026 05:35:42 +0000 (0:00:00.774) 0:46:22.935 ********* 2026-03-24 05:36:17.036463 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:36:17.036476 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:36:17.036488 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:36:17.036501 | orchestrator | 2026-03-24 05:36:17.036513 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-24 05:36:17.036526 | orchestrator | Tuesday 24 March 2026 05:35:43 +0000 (0:00:01.402) 0:46:24.337 ********* 2026-03-24 05:36:17.036538 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-03-24 05:36:17.036551 | orchestrator | 2026-03-24 05:36:17.036563 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-24 05:36:17.036575 | orchestrator | Tuesday 24 March 2026 05:35:44 +0000 (0:00:01.055) 0:46:25.393 ********* 2026-03-24 05:36:17.036587 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.036600 | orchestrator | 2026-03-24 05:36:17.036612 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-24 05:36:17.036625 | orchestrator | Tuesday 24 March 2026 05:35:45 +0000 (0:00:01.091) 
0:46:26.484 ********* 2026-03-24 05:36:17.036637 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.036650 | orchestrator | 2026-03-24 05:36:17.036660 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-24 05:36:17.036671 | orchestrator | Tuesday 24 March 2026 05:35:46 +0000 (0:00:01.093) 0:46:27.578 ********* 2026-03-24 05:36:17.036681 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.036692 | orchestrator | 2026-03-24 05:36:17.036703 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-24 05:36:17.036714 | orchestrator | Tuesday 24 March 2026 05:35:48 +0000 (0:00:01.399) 0:46:28.977 ********* 2026-03-24 05:36:17.036724 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.036735 | orchestrator | 2026-03-24 05:36:17.036745 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-24 05:36:17.036756 | orchestrator | Tuesday 24 March 2026 05:35:49 +0000 (0:00:01.122) 0:46:30.100 ********* 2026-03-24 05:36:17.036784 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-24 05:36:17.036797 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-24 05:36:17.036808 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-24 05:36:17.036819 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-24 05:36:17.036829 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-24 05:36:17.036840 | orchestrator | 2026-03-24 05:36:17.036851 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-24 05:36:17.036861 | orchestrator | Tuesday 24 March 2026 05:35:51 +0000 (0:00:02.484) 0:46:32.585 ********* 2026-03-24 
05:36:17.036872 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.036882 | orchestrator | 2026-03-24 05:36:17.036893 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-24 05:36:17.036913 | orchestrator | Tuesday 24 March 2026 05:35:52 +0000 (0:00:00.726) 0:46:33.311 ********* 2026-03-24 05:36:17.036924 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-03-24 05:36:17.036934 | orchestrator | 2026-03-24 05:36:17.036945 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-24 05:36:17.036955 | orchestrator | Tuesday 24 March 2026 05:35:53 +0000 (0:00:01.102) 0:46:34.414 ********* 2026-03-24 05:36:17.036966 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-24 05:36:17.036977 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-24 05:36:17.036987 | orchestrator | 2026-03-24 05:36:17.036998 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-24 05:36:17.037008 | orchestrator | Tuesday 24 March 2026 05:35:55 +0000 (0:00:01.752) 0:46:36.166 ********* 2026-03-24 05:36:17.037019 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:36:17.037029 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-24 05:36:17.037040 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 05:36:17.037051 | orchestrator | 2026-03-24 05:36:17.037061 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:36:17.037072 | orchestrator | Tuesday 24 March 2026 05:35:58 +0000 (0:00:03.227) 0:46:39.394 ********* 2026-03-24 05:36:17.037082 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-24 05:36:17.037093 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-24 
05:36:17.037104 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.037115 | orchestrator | 2026-03-24 05:36:17.037125 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-24 05:36:17.037136 | orchestrator | Tuesday 24 March 2026 05:36:00 +0000 (0:00:01.642) 0:46:41.037 ********* 2026-03-24 05:36:17.037146 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.037157 | orchestrator | 2026-03-24 05:36:17.037168 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-24 05:36:17.037178 | orchestrator | Tuesday 24 March 2026 05:36:01 +0000 (0:00:00.869) 0:46:41.906 ********* 2026-03-24 05:36:17.037189 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.037200 | orchestrator | 2026-03-24 05:36:17.037210 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-24 05:36:17.037221 | orchestrator | Tuesday 24 March 2026 05:36:01 +0000 (0:00:00.757) 0:46:42.663 ********* 2026-03-24 05:36:17.037231 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.037242 | orchestrator | 2026-03-24 05:36:17.037303 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-24 05:36:17.037316 | orchestrator | Tuesday 24 March 2026 05:36:02 +0000 (0:00:00.778) 0:46:43.442 ********* 2026-03-24 05:36:17.037327 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-03-24 05:36:17.037338 | orchestrator | 2026-03-24 05:36:17.037348 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-24 05:36:17.037359 | orchestrator | Tuesday 24 March 2026 05:36:03 +0000 (0:00:01.198) 0:46:44.640 ********* 2026-03-24 05:36:17.037370 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.037381 | orchestrator | 2026-03-24 05:36:17.037392 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-24 05:36:17.037403 | orchestrator | Tuesday 24 March 2026 05:36:05 +0000 (0:00:01.453) 0:46:46.093 ********* 2026-03-24 05:36:17.037413 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.037424 | orchestrator | 2026-03-24 05:36:17.037435 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-24 05:36:17.037446 | orchestrator | Tuesday 24 March 2026 05:36:08 +0000 (0:00:03.516) 0:46:49.610 ********* 2026-03-24 05:36:17.037457 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-03-24 05:36:17.037468 | orchestrator | 2026-03-24 05:36:17.037479 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-24 05:36:17.037497 | orchestrator | Tuesday 24 March 2026 05:36:09 +0000 (0:00:01.094) 0:46:50.705 ********* 2026-03-24 05:36:17.037507 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.037518 | orchestrator | 2026-03-24 05:36:17.037529 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-24 05:36:17.037540 | orchestrator | Tuesday 24 March 2026 05:36:11 +0000 (0:00:01.983) 0:46:52.689 ********* 2026-03-24 05:36:17.037551 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.037562 | orchestrator | 2026-03-24 05:36:17.037572 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-24 05:36:17.037583 | orchestrator | Tuesday 24 March 2026 05:36:13 +0000 (0:00:01.917) 0:46:54.607 ********* 2026-03-24 05:36:17.037594 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:36:17.037605 | orchestrator | 2026-03-24 05:36:17.037615 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-24 05:36:17.037627 | orchestrator | Tuesday 24 March 2026 05:36:15 +0000 (0:00:02.217) 0:46:56.824 ********* 2026-03-24 
05:36:17.037638 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:36:17.037649 | orchestrator | 2026-03-24 05:36:17.037667 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-24 05:38:29.988751 | orchestrator | Tuesday 24 March 2026 05:36:17 +0000 (0:00:01.098) 0:46:57.924 ********* 2026-03-24 05:38:29.988871 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.988888 | orchestrator | 2026-03-24 05:38:29.988902 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-24 05:38:29.988913 | orchestrator | Tuesday 24 March 2026 05:36:18 +0000 (0:00:01.113) 0:46:59.038 ********* 2026-03-24 05:38:29.988924 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-24 05:38:29.988936 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-24 05:38:29.988947 | orchestrator | 2026-03-24 05:38:29.988958 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-24 05:38:29.988969 | orchestrator | Tuesday 24 March 2026 05:36:19 +0000 (0:00:01.815) 0:47:00.853 ********* 2026-03-24 05:38:29.988980 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-24 05:38:29.988992 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-24 05:38:29.989003 | orchestrator | 2026-03-24 05:38:29.989013 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-24 05:38:29.989025 | orchestrator | Tuesday 24 March 2026 05:36:22 +0000 (0:00:02.874) 0:47:03.728 ********* 2026-03-24 05:38:29.989036 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-24 05:38:29.989047 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-24 05:38:29.989058 | orchestrator | 2026-03-24 05:38:29.989069 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-24 05:38:29.989080 | orchestrator | Tuesday 24 March 2026 05:36:28 +0000 (0:00:05.548) 
0:47:09.277 ********* 2026-03-24 05:38:29.989090 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989101 | orchestrator | 2026-03-24 05:38:29.989112 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-24 05:38:29.989123 | orchestrator | Tuesday 24 March 2026 05:36:29 +0000 (0:00:00.886) 0:47:10.163 ********* 2026-03-24 05:38:29.989134 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-24 05:38:29.989146 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:38:29.989157 | orchestrator | 2026-03-24 05:38:29.989168 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-24 05:38:29.989179 | orchestrator | Tuesday 24 March 2026 05:36:42 +0000 (0:00:13.296) 0:47:23.460 ********* 2026-03-24 05:38:29.989190 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989201 | orchestrator | 2026-03-24 05:38:29.989211 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-24 05:38:29.989222 | orchestrator | Tuesday 24 March 2026 05:36:43 +0000 (0:00:00.881) 0:47:24.341 ********* 2026-03-24 05:38:29.989233 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989269 | orchestrator | 2026-03-24 05:38:29.989281 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-24 05:38:29.989323 | orchestrator | Tuesday 24 March 2026 05:36:44 +0000 (0:00:00.884) 0:47:25.225 ********* 2026-03-24 05:38:29.989343 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989362 | orchestrator | 2026-03-24 05:38:29.989381 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-03-24 05:38:29.989399 | orchestrator | Tuesday 24 March 2026 05:36:45 +0000 (0:00:00.740) 0:47:25.966 ********* 2026-03-24 05:38:29.989418 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:38:29.989430 | orchestrator | 2026-03-24 05:38:29.989443 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-24 05:38:29.989470 | orchestrator | Tuesday 24 March 2026 05:36:47 +0000 (0:00:01.978) 0:47:27.945 ********* 2026-03-24 05:38:29.989483 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989496 | orchestrator | 2026-03-24 05:38:29.989508 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-24 05:38:29.989520 | orchestrator | Tuesday 24 March 2026 05:36:47 +0000 (0:00:00.781) 0:47:28.727 ********* 2026-03-24 05:38:29.989533 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989545 | orchestrator | 2026-03-24 05:38:29.989557 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-24 05:38:29.989570 | orchestrator | Tuesday 24 March 2026 05:36:48 +0000 (0:00:00.764) 0:47:29.491 ********* 2026-03-24 05:38:29.989581 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989593 | orchestrator | 2026-03-24 05:38:29.989606 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-24 05:38:29.989618 | orchestrator | Tuesday 24 March 2026 05:36:49 +0000 (0:00:00.758) 0:47:30.250 ********* 2026-03-24 05:38:29.989630 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989643 | orchestrator | 2026-03-24 05:38:29.989653 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-24 05:38:29.989664 | orchestrator | Tuesday 24 March 2026 05:36:50 +0000 (0:00:00.773) 0:47:31.024 ********* 2026-03-24 
05:38:29.989674 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989685 | orchestrator | 2026-03-24 05:38:29.989696 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-24 05:38:29.989706 | orchestrator | Tuesday 24 March 2026 05:36:50 +0000 (0:00:00.767) 0:47:31.791 ********* 2026-03-24 05:38:29.989717 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989727 | orchestrator | 2026-03-24 05:38:29.989738 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-24 05:38:29.989748 | orchestrator | Tuesday 24 March 2026 05:36:51 +0000 (0:00:00.764) 0:47:32.555 ********* 2026-03-24 05:38:29.989759 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:38:29.989769 | orchestrator | 2026-03-24 05:38:29.989780 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-03-24 05:38:29.989791 | orchestrator | 2026-03-24 05:38:29.989804 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:38:29.989822 | orchestrator | Tuesday 24 March 2026 05:36:53 +0000 (0:00:01.755) 0:47:34.310 ********* 2026-03-24 05:38:29.989840 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:38:29.989857 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:38:29.989875 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:38:29.989892 | orchestrator | 2026-03-24 05:38:29.989909 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:38:29.989949 | orchestrator | Tuesday 24 March 2026 05:36:55 +0000 (0:00:01.667) 0:47:35.977 ********* 2026-03-24 05:38:29.989969 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:38:29.989989 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:38:29.990007 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:38:29.990092 | orchestrator | 2026-03-24 05:38:29.990104 | orchestrator | TASK 
[Re-enable pg autoscale on pools] ***************************************** 2026-03-24 05:38:29.990115 | orchestrator | Tuesday 24 March 2026 05:36:56 +0000 (0:00:01.337) 0:47:37.315 ********* 2026-03-24 05:38:29.990138 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-03-24 05:38:29.990158 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-03-24 05:38:29.990180 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-03-24 05:38:29.990209 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-03-24 05:38:29.990230 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-03-24 05:38:29.990249 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-03-24 05:38:29.990268 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-03-24 05:38:29.990285 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-03-24 05:38:29.990328 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-03-24 05:38:29.990345 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-03-24 05:38:29.990362 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-03-24 05:38:29.990381 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-03-24 05:38:29.990399 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-24 05:38:29.990417 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-24 05:38:29.990434 | orchestrator | 2026-03-24 05:38:29.990454 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-03-24 05:38:29.990471 | orchestrator | Tuesday 24 March 2026 05:38:11 +0000 (0:01:15.222) 0:48:52.537 ********* 2026-03-24 05:38:29.990488 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-24 05:38:29.990500 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-24 05:38:29.990510 | orchestrator | 2026-03-24 05:38:29.990521 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-03-24 05:38:29.990532 | orchestrator | Tuesday 24 March 2026 05:38:16 +0000 (0:00:05.286) 0:48:57.824 ********* 2026-03-24 05:38:29.990557 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:38:29.990584 | orchestrator | 2026-03-24 05:38:29.990606 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-03-24 05:38:29.990623 | orchestrator | 2026-03-24 05:38:29.990643 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:38:29.990662 | orchestrator | Tuesday 24 March 2026 05:38:20 +0000 (0:00:03.316) 0:49:01.141 ********* 2026-03-24 05:38:29.990679 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-24 05:38:29.990696 | orchestrator | 2026-03-24 05:38:29.990707 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:38:29.990718 | orchestrator | Tuesday 24 March 2026 05:38:21 +0000 (0:00:01.107) 0:49:02.248 ********* 2026-03-24 05:38:29.990728 | orchestrator | ok: 
[testbed-node-0] 2026-03-24 05:38:29.990739 | orchestrator | 2026-03-24 05:38:29.990750 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:38:29.990760 | orchestrator | Tuesday 24 March 2026 05:38:22 +0000 (0:00:01.448) 0:49:03.697 ********* 2026-03-24 05:38:29.990771 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:29.990781 | orchestrator | 2026-03-24 05:38:29.990792 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:38:29.990802 | orchestrator | Tuesday 24 March 2026 05:38:23 +0000 (0:00:01.124) 0:49:04.821 ********* 2026-03-24 05:38:29.990824 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:29.990835 | orchestrator | 2026-03-24 05:38:29.990846 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:38:29.990857 | orchestrator | Tuesday 24 March 2026 05:38:25 +0000 (0:00:01.519) 0:49:06.341 ********* 2026-03-24 05:38:29.990867 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:29.990878 | orchestrator | 2026-03-24 05:38:29.990888 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:38:29.990899 | orchestrator | Tuesday 24 March 2026 05:38:26 +0000 (0:00:01.136) 0:49:07.477 ********* 2026-03-24 05:38:29.990909 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:29.990920 | orchestrator | 2026-03-24 05:38:29.990930 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:38:29.990941 | orchestrator | Tuesday 24 March 2026 05:38:27 +0000 (0:00:01.112) 0:49:08.590 ********* 2026-03-24 05:38:29.990952 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:29.990962 | orchestrator | 2026-03-24 05:38:29.990973 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:38:29.990984 | orchestrator | Tuesday 24 
March 2026 05:38:28 +0000 (0:00:01.135) 0:49:09.725 ********* 2026-03-24 05:38:29.991008 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:38:53.797190 | orchestrator | 2026-03-24 05:38:53.797378 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:38:53.797400 | orchestrator | Tuesday 24 March 2026 05:38:29 +0000 (0:00:01.152) 0:49:10.878 ********* 2026-03-24 05:38:53.797412 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:53.797424 | orchestrator | 2026-03-24 05:38:53.797436 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:38:53.797447 | orchestrator | Tuesday 24 March 2026 05:38:31 +0000 (0:00:01.122) 0:49:12.000 ********* 2026-03-24 05:38:53.797458 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 05:38:53.797469 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:38:53.797480 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:38:53.797491 | orchestrator | 2026-03-24 05:38:53.797502 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:38:53.797513 | orchestrator | Tuesday 24 March 2026 05:38:32 +0000 (0:00:01.659) 0:49:13.660 ********* 2026-03-24 05:38:53.797523 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:53.797534 | orchestrator | 2026-03-24 05:38:53.797545 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:38:53.797556 | orchestrator | Tuesday 24 March 2026 05:38:33 +0000 (0:00:01.230) 0:49:14.891 ********* 2026-03-24 05:38:53.797566 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 05:38:53.797577 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:38:53.797587 | orchestrator | 
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:38:53.797598 | orchestrator | 2026-03-24 05:38:53.797609 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:38:53.797620 | orchestrator | Tuesday 24 March 2026 05:38:36 +0000 (0:00:02.916) 0:49:17.808 ********* 2026-03-24 05:38:53.797631 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 05:38:53.797642 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 05:38:53.797653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 05:38:53.797664 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:38:53.797675 | orchestrator | 2026-03-24 05:38:53.797686 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:38:53.797696 | orchestrator | Tuesday 24 March 2026 05:38:38 +0000 (0:00:01.404) 0:49:19.212 ********* 2026-03-24 05:38:53.797709 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:38:53.797751 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:38:53.797779 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:38:53.797792 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:38:53.797805 | orchestrator | 2026-03-24 05:38:53.797818 | 
orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:38:53.797831 | orchestrator | Tuesday 24 March 2026 05:38:39 +0000 (0:00:01.627) 0:49:20.840 ********* 2026-03-24 05:38:53.797845 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:38:53.797861 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:38:53.797873 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:38:53.797884 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:38:53.797895 | orchestrator | 2026-03-24 05:38:53.797906 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:38:53.797935 | orchestrator | Tuesday 24 March 2026 05:38:41 +0000 (0:00:01.197) 0:49:22.038 ********* 2026-03-24 05:38:53.797949 | orchestrator | ok: [testbed-node-0] 
=> (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:38:34.538095', 'end': '2026-03-24 05:38:34.585955', 'delta': '0:00:00.047860', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:38:53.797964 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:38:35.152082', 'end': '2026-03-24 05:38:35.203860', 'delta': '0:00:00.051778', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:38:53.797983 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:38:35.727129', 'end': '2026-03-24 05:38:35.775963', 'delta': '0:00:00.048834', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:38:53.797995 | orchestrator | 2026-03-24 05:38:53.798012 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:38:53.798096 | orchestrator | Tuesday 24 March 2026 05:38:42 +0000 (0:00:01.231) 0:49:23.269 ********* 2026-03-24 05:38:53.798108 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:53.798118 | orchestrator | 2026-03-24 05:38:53.798129 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:38:53.798140 | orchestrator | Tuesday 24 March 2026 05:38:43 +0000 (0:00:01.243) 0:49:24.513 ********* 2026-03-24 05:38:53.798150 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:38:53.798161 | orchestrator | 2026-03-24 05:38:53.798172 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:38:53.798182 | orchestrator | Tuesday 24 March 2026 05:38:44 +0000 (0:00:01.261) 0:49:25.774 ********* 2026-03-24 05:38:53.798193 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:53.798204 | orchestrator | 2026-03-24 05:38:53.798214 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:38:53.798225 | orchestrator | Tuesday 24 March 2026 05:38:46 +0000 (0:00:01.154) 0:49:26.929 ********* 2026-03-24 05:38:53.798235 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:53.798246 | orchestrator | 2026-03-24 05:38:53.798257 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:38:53.798267 | orchestrator | Tuesday 24 March 2026 05:38:48 +0000 (0:00:01.998) 0:49:28.927 
********* 2026-03-24 05:38:53.798278 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:38:53.798289 | orchestrator | 2026-03-24 05:38:53.798327 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:38:53.798338 | orchestrator | Tuesday 24 March 2026 05:38:49 +0000 (0:00:01.116) 0:49:30.044 ********* 2026-03-24 05:38:53.798349 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:38:53.798360 | orchestrator | 2026-03-24 05:38:53.798371 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:38:53.798381 | orchestrator | Tuesday 24 March 2026 05:38:50 +0000 (0:00:01.125) 0:49:31.169 ********* 2026-03-24 05:38:53.798392 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:38:53.798403 | orchestrator | 2026-03-24 05:38:53.798414 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:38:53.798424 | orchestrator | Tuesday 24 March 2026 05:38:51 +0000 (0:00:01.261) 0:49:32.431 ********* 2026-03-24 05:38:53.798435 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:38:53.798446 | orchestrator | 2026-03-24 05:38:53.798456 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:38:53.798467 | orchestrator | Tuesday 24 March 2026 05:38:52 +0000 (0:00:01.113) 0:49:33.545 ********* 2026-03-24 05:38:53.798486 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:02.159536 | orchestrator | 2026-03-24 05:39:02.159632 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:39:02.159644 | orchestrator | Tuesday 24 March 2026 05:38:53 +0000 (0:00:01.139) 0:49:34.685 ********* 2026-03-24 05:39:02.159653 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:02.159682 | orchestrator | 2026-03-24 05:39:02.159691 | orchestrator | TASK [ceph-facts : Resolve dedicated_device 
link(s)] *************************** 2026-03-24 05:39:02.159699 | orchestrator | Tuesday 24 March 2026 05:38:55 +0000 (0:00:01.237) 0:49:35.922 ********* 2026-03-24 05:39:02.159707 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:02.159715 | orchestrator | 2026-03-24 05:39:02.159723 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:39:02.159731 | orchestrator | Tuesday 24 March 2026 05:38:56 +0000 (0:00:01.181) 0:49:37.104 ********* 2026-03-24 05:39:02.159739 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:02.159746 | orchestrator | 2026-03-24 05:39:02.159754 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:39:02.159762 | orchestrator | Tuesday 24 March 2026 05:38:57 +0000 (0:00:01.102) 0:49:38.207 ********* 2026-03-24 05:39:02.159770 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:02.159778 | orchestrator | 2026-03-24 05:39:02.159786 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:39:02.159794 | orchestrator | Tuesday 24 March 2026 05:38:58 +0000 (0:00:01.114) 0:49:39.321 ********* 2026-03-24 05:39:02.159802 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:02.159810 | orchestrator | 2026-03-24 05:39:02.159831 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:39:02.159845 | orchestrator | Tuesday 24 March 2026 05:38:59 +0000 (0:00:01.104) 0:49:40.426 ********* 2026-03-24 05:39:02.159860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': 
'0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:39:02.159875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:39:02.159904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:39:02.159921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:39:02.159937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-03-24 05:39:02.159952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:39:02.159983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:39:02.160001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2db98c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 
'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:39:02.160012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:39:02.160020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:39:02.160028 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:02.160036 | orchestrator | 2026-03-24 05:39:02.160044 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:39:02.160052 | orchestrator | Tuesday 24 March 2026 05:39:00 +0000 (0:00:01.370) 0:49:41.796 ********* 2026-03-24 05:39:02.160067 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:02.160088 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:09.751604 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:09.751748 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:09.751786 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:09.751800 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:09.751835 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:09.751874 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2db98c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1', 'scsi-SQEMU_QEMU_HARDDISK_2db98c7e-0495-471f-a090-f7de28c85f93-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:09.751895 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:09.751908 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:39:09.751929 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:09.751943 | orchestrator | 2026-03-24 05:39:09.751956 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:39:09.751968 | 
orchestrator | Tuesday 24 March 2026 05:39:02 +0000 (0:00:01.259) 0:49:43.055 ********* 2026-03-24 05:39:09.751979 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:39:09.751991 | orchestrator | 2026-03-24 05:39:09.752002 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:39:09.752012 | orchestrator | Tuesday 24 March 2026 05:39:03 +0000 (0:00:01.502) 0:49:44.558 ********* 2026-03-24 05:39:09.752023 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:39:09.752033 | orchestrator | 2026-03-24 05:39:09.752044 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:39:09.752055 | orchestrator | Tuesday 24 March 2026 05:39:04 +0000 (0:00:01.118) 0:49:45.676 ********* 2026-03-24 05:39:09.752066 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:39:09.752076 | orchestrator | 2026-03-24 05:39:09.752087 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:39:09.752100 | orchestrator | Tuesday 24 March 2026 05:39:06 +0000 (0:00:01.489) 0:49:47.166 ********* 2026-03-24 05:39:09.752112 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:09.752125 | orchestrator | 2026-03-24 05:39:09.752138 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:39:09.752151 | orchestrator | Tuesday 24 March 2026 05:39:07 +0000 (0:00:01.137) 0:49:48.304 ********* 2026-03-24 05:39:09.752164 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:09.752177 | orchestrator | 2026-03-24 05:39:09.752190 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:39:09.752202 | orchestrator | Tuesday 24 March 2026 05:39:08 +0000 (0:00:01.210) 0:49:49.514 ********* 2026-03-24 05:39:09.752214 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:39:09.752227 | orchestrator | 2026-03-24 05:39:09.752240 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:39:09.752260 | orchestrator | Tuesday 24 March 2026 05:39:09 +0000 (0:00:01.131) 0:49:50.646 ********* 2026-03-24 05:40:03.086301 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 05:40:03.086456 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-24 05:40:03.086467 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-24 05:40:03.086474 | orchestrator | 2026-03-24 05:40:03.086481 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:40:03.086489 | orchestrator | Tuesday 24 March 2026 05:39:11 +0000 (0:00:01.708) 0:49:52.354 ********* 2026-03-24 05:40:03.086496 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-24 05:40:03.086503 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-24 05:40:03.086510 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-24 05:40:03.086516 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:40:03.086523 | orchestrator | 2026-03-24 05:40:03.086530 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:40:03.086536 | orchestrator | Tuesday 24 March 2026 05:39:12 +0000 (0:00:01.179) 0:49:53.533 ********* 2026-03-24 05:40:03.086542 | orchestrator | skipping: [testbed-node-0] 2026-03-24 05:40:03.086549 | orchestrator | 2026-03-24 05:40:03.086555 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 05:40:03.086561 | orchestrator | Tuesday 24 March 2026 05:39:13 +0000 (0:00:01.122) 0:49:54.655 ********* 2026-03-24 05:40:03.086567 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 05:40:03.086574 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 
05:40:03.086581 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:40:03.086605 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:40:03.086612 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:40:03.086618 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:40:03.086624 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:40:03.086630 | orchestrator | 2026-03-24 05:40:03.086636 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 05:40:03.086643 | orchestrator | Tuesday 24 March 2026 05:39:15 +0000 (0:00:02.183) 0:49:56.838 ********* 2026-03-24 05:40:03.086649 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-24 05:40:03.086666 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:40:03.086673 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:40:03.086679 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:40:03.086685 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:40:03.086691 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:40:03.086697 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:40:03.086704 | orchestrator | 2026-03-24 05:40:03.086710 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-03-24 05:40:03.086716 | orchestrator | Tuesday 24 March 2026 05:39:18 +0000 (0:00:02.548) 0:49:59.387 
********* 2026-03-24 05:40:03.086722 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:40:03.086728 | orchestrator | 2026-03-24 05:40:03.086735 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-03-24 05:40:03.086741 | orchestrator | Tuesday 24 March 2026 05:39:21 +0000 (0:00:03.140) 0:50:02.528 ********* 2026-03-24 05:40:03.086747 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:40:03.086753 | orchestrator | 2026-03-24 05:40:03.086759 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-03-24 05:40:03.086766 | orchestrator | Tuesday 24 March 2026 05:39:24 +0000 (0:00:03.135) 0:50:05.664 ********* 2026-03-24 05:40:03.086772 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:40:03.086779 | orchestrator | 2026-03-24 05:40:03.086785 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-03-24 05:40:03.086791 | orchestrator | Tuesday 24 March 2026 05:39:26 +0000 (0:00:02.148) 0:50:07.812 ********* 2026-03-24 05:40:03.086800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4639', 'value': {'gid': 4639, 'name': 'testbed-node-3', 'rank': 0, 'incarnation': 3, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.13:6817/1011302710', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.13:6816', 'nonce': 1011302710}, {'type': 'v1', 'addr': '192.168.16.13:6817', 'nonce': 1011302710}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-03-24 
05:40:03.086810 | orchestrator | 2026-03-24 05:40:03.086816 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-03-24 05:40:03.086822 | orchestrator | Tuesday 24 March 2026 05:39:28 +0000 (0:00:01.175) 0:50:08.988 ********* 2026-03-24 05:40:03.086841 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-3) 2026-03-24 05:40:03.086849 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-24 05:40:03.086863 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-24 05:40:03.086871 | orchestrator | 2026-03-24 05:40:03.086878 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-03-24 05:40:03.086886 | orchestrator | Tuesday 24 March 2026 05:39:29 +0000 (0:00:01.884) 0:50:10.872 ********* 2026-03-24 05:40:03.086893 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-03-24 05:40:03.086900 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5) 2026-03-24 05:40:03.086907 | orchestrator | 2026-03-24 05:40:03.086915 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-03-24 05:40:03.086922 | orchestrator | Tuesday 24 March 2026 05:39:31 +0000 (0:00:01.457) 0:50:12.329 ********* 2026-03-24 05:40:03.086929 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:40:03.086936 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:40:03.086944 | orchestrator | 2026-03-24 05:40:03.086951 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-03-24 05:40:03.086959 | orchestrator | Tuesday 24 March 2026 05:39:41 +0000 (0:00:10.098) 0:50:22.428 ********* 2026-03-24 05:40:03.086966 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 
2026-03-24 05:40:03.086974 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:40:03.086981 | orchestrator | 2026-03-24 05:40:03.086988 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-03-24 05:40:03.086995 | orchestrator | Tuesday 24 March 2026 05:39:45 +0000 (0:00:03.917) 0:50:26.346 ********* 2026-03-24 05:40:03.087007 | orchestrator | ok: [testbed-node-0] 2026-03-24 05:40:03.087018 | orchestrator | 2026-03-24 05:40:03.087028 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-03-24 05:40:03.087039 | orchestrator | Tuesday 24 March 2026 05:39:47 +0000 (0:00:02.094) 0:50:28.440 ********* 2026-03-24 05:40:03.087051 | orchestrator | changed: [testbed-node-0] 2026-03-24 05:40:03.087062 | orchestrator | 2026-03-24 05:40:03.087073 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-03-24 05:40:03.087084 | orchestrator | 2026-03-24 05:40:03.087094 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:40:03.087104 | orchestrator | Tuesday 24 March 2026 05:39:49 +0000 (0:00:01.505) 0:50:29.945 ********* 2026-03-24 05:40:03.087111 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-24 05:40:03.087117 | orchestrator | 2026-03-24 05:40:03.087123 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:40:03.087129 | orchestrator | Tuesday 24 March 2026 05:39:50 +0000 (0:00:01.105) 0:50:31.051 ********* 2026-03-24 05:40:03.087135 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:03.087141 | orchestrator | 2026-03-24 05:40:03.087148 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:40:03.087154 | orchestrator | Tuesday 24 March 2026 05:39:51 
+0000 (0:00:01.447) 0:50:32.499 ********* 2026-03-24 05:40:03.087160 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:03.087166 | orchestrator | 2026-03-24 05:40:03.087172 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:40:03.087178 | orchestrator | Tuesday 24 March 2026 05:39:52 +0000 (0:00:01.132) 0:50:33.631 ********* 2026-03-24 05:40:03.087184 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:03.087190 | orchestrator | 2026-03-24 05:40:03.087196 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:40:03.087203 | orchestrator | Tuesday 24 March 2026 05:39:54 +0000 (0:00:01.493) 0:50:35.125 ********* 2026-03-24 05:40:03.087209 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:03.087215 | orchestrator | 2026-03-24 05:40:03.087221 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:40:03.087233 | orchestrator | Tuesday 24 March 2026 05:39:55 +0000 (0:00:01.332) 0:50:36.458 ********* 2026-03-24 05:40:03.087239 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:03.087245 | orchestrator | 2026-03-24 05:40:03.087251 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:40:03.087257 | orchestrator | Tuesday 24 March 2026 05:39:56 +0000 (0:00:01.142) 0:50:37.601 ********* 2026-03-24 05:40:03.087263 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:03.087269 | orchestrator | 2026-03-24 05:40:03.087276 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:40:03.087282 | orchestrator | Tuesday 24 March 2026 05:39:57 +0000 (0:00:01.138) 0:50:38.740 ********* 2026-03-24 05:40:03.087288 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:03.087294 | orchestrator | 2026-03-24 05:40:03.087300 | orchestrator | TASK [ceph-facts : Set_fact 
ceph_release ceph_stable_release] ****************** 2026-03-24 05:40:03.087306 | orchestrator | Tuesday 24 March 2026 05:39:58 +0000 (0:00:01.145) 0:50:39.886 ********* 2026-03-24 05:40:03.087312 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:03.087341 | orchestrator | 2026-03-24 05:40:03.087348 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:40:03.087354 | orchestrator | Tuesday 24 March 2026 05:40:00 +0000 (0:00:01.146) 0:50:41.032 ********* 2026-03-24 05:40:03.087360 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:40:03.087366 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:40:03.087372 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:40:03.087379 | orchestrator | 2026-03-24 05:40:03.087386 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:40:03.087396 | orchestrator | Tuesday 24 March 2026 05:40:01 +0000 (0:00:01.661) 0:50:42.694 ********* 2026-03-24 05:40:03.087405 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:03.087415 | orchestrator | 2026-03-24 05:40:03.087432 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:40:27.730811 | orchestrator | Tuesday 24 March 2026 05:40:03 +0000 (0:00:01.279) 0:50:43.973 ********* 2026-03-24 05:40:27.730907 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:40:27.730918 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:40:27.730925 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:40:27.730932 | orchestrator | 2026-03-24 05:40:27.730939 | orchestrator | TASK 
[ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:40:27.730946 | orchestrator | Tuesday 24 March 2026 05:40:05 +0000 (0:00:02.891) 0:50:46.865 ********* 2026-03-24 05:40:27.730953 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-24 05:40:27.730960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-24 05:40:27.730966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-24 05:40:27.730973 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.730979 | orchestrator | 2026-03-24 05:40:27.730986 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:40:27.730992 | orchestrator | Tuesday 24 March 2026 05:40:07 +0000 (0:00:01.367) 0:50:48.232 ********* 2026-03-24 05:40:27.731001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:40:27.731010 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:40:27.731016 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:40:27.731045 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.731052 | orchestrator | 2026-03-24 05:40:27.731070 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:40:27.731077 | orchestrator | Tuesday 24 March 2026 05:40:09 +0000 
(0:00:01.897) 0:50:50.129 ********* 2026-03-24 05:40:27.731086 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:27.731095 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:27.731102 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:27.731109 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.731115 | orchestrator | 2026-03-24 05:40:27.731122 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:40:27.731128 | orchestrator | Tuesday 24 March 2026 05:40:10 +0000 (0:00:01.156) 0:50:51.285 ********* 2026-03-24 05:40:27.731137 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 
05:40:03.582953', 'end': '2026-03-24 05:40:03.633512', 'delta': '0:00:00.050559', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:40:27.731161 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:40:04.168103', 'end': '2026-03-24 05:40:04.227836', 'delta': '0:00:00.059733', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:40:27.731168 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:40:04.770866', 'end': '2026-03-24 05:40:04.823403', 'delta': '0:00:00.052537', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:40:27.731180 | orchestrator | 2026-03-24 05:40:27.731187 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:40:27.731194 | orchestrator | Tuesday 24 March 2026 05:40:11 +0000 (0:00:01.190) 0:50:52.476 ********* 2026-03-24 05:40:27.731200 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:27.731207 | orchestrator | 2026-03-24 05:40:27.731217 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:40:27.731225 | orchestrator | Tuesday 24 March 2026 05:40:12 +0000 (0:00:01.267) 0:50:53.743 ********* 2026-03-24 05:40:27.731231 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.731238 | orchestrator | 2026-03-24 05:40:27.731244 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:40:27.731250 | orchestrator | Tuesday 24 March 2026 05:40:14 +0000 (0:00:01.560) 0:50:55.304 ********* 2026-03-24 05:40:27.731256 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:27.731263 | orchestrator | 2026-03-24 05:40:27.731269 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:40:27.731275 | orchestrator | Tuesday 24 March 2026 05:40:15 +0000 (0:00:01.162) 0:50:56.467 ********* 2026-03-24 05:40:27.731281 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:40:27.731288 | orchestrator | 2026-03-24 05:40:27.731294 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:40:27.731300 | orchestrator | Tuesday 24 March 2026 05:40:17 +0000 (0:00:01.955) 0:50:58.422 ********* 2026-03-24 05:40:27.731307 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:27.731313 | orchestrator | 2026-03-24 
05:40:27.731320 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:40:27.731370 | orchestrator | Tuesday 24 March 2026 05:40:18 +0000 (0:00:01.132) 0:50:59.555 ********* 2026-03-24 05:40:27.731377 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.731384 | orchestrator | 2026-03-24 05:40:27.731390 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:40:27.731397 | orchestrator | Tuesday 24 March 2026 05:40:19 +0000 (0:00:01.088) 0:51:00.643 ********* 2026-03-24 05:40:27.731404 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.731410 | orchestrator | 2026-03-24 05:40:27.731417 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:40:27.731424 | orchestrator | Tuesday 24 March 2026 05:40:20 +0000 (0:00:01.220) 0:51:01.864 ********* 2026-03-24 05:40:27.731430 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.731436 | orchestrator | 2026-03-24 05:40:27.731442 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:40:27.731449 | orchestrator | Tuesday 24 March 2026 05:40:22 +0000 (0:00:01.099) 0:51:02.963 ********* 2026-03-24 05:40:27.731456 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.731462 | orchestrator | 2026-03-24 05:40:27.731469 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:40:27.731475 | orchestrator | Tuesday 24 March 2026 05:40:23 +0000 (0:00:01.123) 0:51:04.086 ********* 2026-03-24 05:40:27.731481 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:27.731487 | orchestrator | 2026-03-24 05:40:27.731492 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:40:27.731498 | orchestrator | Tuesday 24 March 2026 05:40:24 +0000 (0:00:01.122) 0:51:05.209 
********* 2026-03-24 05:40:27.731504 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.731510 | orchestrator | 2026-03-24 05:40:27.731517 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:40:27.731523 | orchestrator | Tuesday 24 March 2026 05:40:25 +0000 (0:00:01.140) 0:51:06.350 ********* 2026-03-24 05:40:27.731535 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:27.731541 | orchestrator | 2026-03-24 05:40:27.731547 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:40:27.731553 | orchestrator | Tuesday 24 March 2026 05:40:26 +0000 (0:00:01.176) 0:51:07.526 ********* 2026-03-24 05:40:27.731559 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:27.731565 | orchestrator | 2026-03-24 05:40:29.110533 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:40:29.110613 | orchestrator | Tuesday 24 March 2026 05:40:27 +0000 (0:00:01.090) 0:51:08.617 ********* 2026-03-24 05:40:29.110624 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:29.110632 | orchestrator | 2026-03-24 05:40:29.110640 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:40:29.110647 | orchestrator | Tuesday 24 March 2026 05:40:28 +0000 (0:00:01.146) 0:51:09.763 ********* 2026-03-24 05:40:29.110656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:40:29.110667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': 
{'virtual': 1, 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'uuids': ['53f92492-3feb-4aff-ba7b-51c07dc9f447'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc']}})  2026-03-24 05:40:29.110689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f47182f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:40:29.110698 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b']}})  2026-03-24 05:40:29.110706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:40:29.110730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:40:29.110750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-42-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:40:29.110758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:40:29.110765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR', 'dm-uuid-CRYPT-LUKS2-0e39c5b023134ee09db3234d14233a9c-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:40:29.110776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:40:29.110784 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'uuids': ['0e39c5b0-2313-4ee0-9db3-234d14233a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR']}})  2026-03-24 05:40:29.110791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80']}})  2026-03-24 05:40:29.110806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:40:29.110824 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85facbe5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:40:30.808548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:40:30.808632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:40:30.808644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc', 'dm-uuid-CRYPT-LUKS2-53f924923feb4affba7b51c07dc9f447-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:40:30.808671 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:30.808680 | orchestrator | 2026-03-24 05:40:30.808687 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:40:30.808695 | orchestrator | Tuesday 24 March 2026 05:40:30 +0000 (0:00:01.340) 0:51:11.103 ********* 2026-03-24 05:40:30.808703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:30.808711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'uuids': ['53f92492-3feb-4aff-ba7b-51c07dc9f447'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:30.808721 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f47182f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:30.808748 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:30.808759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:30.808775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:30.808783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:30.808790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:30.808806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR', 'dm-uuid-CRYPT-LUKS2-0e39c5b023134ee09db3234d14233a9c-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:36.171441 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:36.171559 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'uuids': ['0e39c5b0-2313-4ee0-9db3-234d14233a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:36.171599 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:36.171617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:36.171698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85facbe5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:36.171724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:36.171737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:36.171749 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc', 'dm-uuid-CRYPT-LUKS2-53f924923feb4affba7b51c07dc9f447-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:40:36.171762 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:40:36.171775 | orchestrator | 2026-03-24 05:40:36.171787 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:40:36.171800 | orchestrator | Tuesday 24 March 2026 05:40:31 +0000 (0:00:01.776) 0:51:12.880 ********* 2026-03-24 05:40:36.171829 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:36.171853 | orchestrator | 2026-03-24 05:40:36.171865 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:40:36.171876 | orchestrator | Tuesday 24 March 2026 05:40:33 +0000 (0:00:01.552) 0:51:14.433 ********* 2026-03-24 05:40:36.171887 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:36.171899 | orchestrator | 2026-03-24 05:40:36.171912 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:40:36.171924 | orchestrator | Tuesday 24 March 2026 05:40:34 +0000 (0:00:01.139) 0:51:15.572 ********* 2026-03-24 05:40:36.171936 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:40:36.171948 | orchestrator | 2026-03-24 05:40:36.171966 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:40:36.171987 | orchestrator | Tuesday 24 March 2026 05:40:36 +0000 (0:00:01.491) 0:51:17.063 ********* 2026-03-24 05:41:16.250758 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:41:16.250880 | orchestrator | 2026-03-24 05:41:16.250893 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:41:16.250901 | orchestrator | Tuesday 24 March 2026 05:40:37 +0000 (0:00:01.119) 0:51:18.183 ********* 2026-03-24 05:41:16.250907 | orchestrator | skipping: [testbed-node-3] 2026-03-24 
05:41:16.250913 | orchestrator | 2026-03-24 05:41:16.250919 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:41:16.250926 | orchestrator | Tuesday 24 March 2026 05:40:38 +0000 (0:00:01.223) 0:51:19.407 ********* 2026-03-24 05:41:16.250932 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:41:16.250939 | orchestrator | 2026-03-24 05:41:16.250945 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:41:16.250952 | orchestrator | Tuesday 24 March 2026 05:40:39 +0000 (0:00:01.143) 0:51:20.550 ********* 2026-03-24 05:41:16.250959 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-24 05:41:16.250966 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-24 05:41:16.250972 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-24 05:41:16.250978 | orchestrator | 2026-03-24 05:41:16.250985 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:41:16.250991 | orchestrator | Tuesday 24 March 2026 05:40:41 +0000 (0:00:01.640) 0:51:22.191 ********* 2026-03-24 05:41:16.250997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-24 05:41:16.251004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-24 05:41:16.251010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-24 05:41:16.251017 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:41:16.251023 | orchestrator | 2026-03-24 05:41:16.251029 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:41:16.251036 | orchestrator | Tuesday 24 March 2026 05:40:42 +0000 (0:00:01.113) 0:51:23.305 ********* 2026-03-24 05:41:16.251043 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-24 05:41:16.251050 | 
orchestrator |
2026-03-24 05:41:16.251058 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:41:16.251066 | orchestrator | Tuesday 24 March 2026 05:40:43 +0000 (0:00:01.099) 0:51:24.405 *********
2026-03-24 05:41:16.251072 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251079 | orchestrator |
2026-03-24 05:41:16.251085 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:41:16.251092 | orchestrator | Tuesday 24 March 2026 05:40:44 +0000 (0:00:01.101) 0:51:25.507 *********
2026-03-24 05:41:16.251098 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251104 | orchestrator |
2026-03-24 05:41:16.251110 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:41:16.251117 | orchestrator | Tuesday 24 March 2026 05:40:45 +0000 (0:00:01.168) 0:51:26.675 *********
2026-03-24 05:41:16.251123 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251129 | orchestrator |
2026-03-24 05:41:16.251135 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:41:16.251141 | orchestrator | Tuesday 24 March 2026 05:40:46 +0000 (0:00:01.105) 0:51:27.781 *********
2026-03-24 05:41:16.251147 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:41:16.251154 | orchestrator |
2026-03-24 05:41:16.251161 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:41:16.251167 | orchestrator | Tuesday 24 March 2026 05:40:48 +0000 (0:00:01.206) 0:51:28.987 *********
2026-03-24 05:41:16.251173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:41:16.251179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 05:41:16.251186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 05:41:16.251192 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251198 | orchestrator |
2026-03-24 05:41:16.251229 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:41:16.251237 | orchestrator | Tuesday 24 March 2026 05:40:49 +0000 (0:00:01.407) 0:51:30.395 *********
2026-03-24 05:41:16.251244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:41:16.251250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 05:41:16.251257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 05:41:16.251263 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251270 | orchestrator |
2026-03-24 05:41:16.251278 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:41:16.251284 | orchestrator | Tuesday 24 March 2026 05:40:50 +0000 (0:00:01.354) 0:51:31.749 *********
2026-03-24 05:41:16.251290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:41:16.251296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-24 05:41:16.251303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-24 05:41:16.251308 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251315 | orchestrator |
2026-03-24 05:41:16.251321 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:41:16.251327 | orchestrator | Tuesday 24 March 2026 05:40:52 +0000 (0:00:01.376) 0:51:33.126 *********
2026-03-24 05:41:16.251333 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:41:16.251356 | orchestrator |
2026-03-24 05:41:16.251363 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:41:16.251370 | orchestrator | Tuesday 24 March 2026 05:40:53 +0000 (0:00:01.161) 0:51:34.288 *********
2026-03-24 05:41:16.251377 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-24 05:41:16.251383 | orchestrator |
2026-03-24 05:41:16.251390 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-24 05:41:16.251410 | orchestrator | Tuesday 24 March 2026 05:40:54 +0000 (0:00:01.328) 0:51:35.616 *********
2026-03-24 05:41:16.251433 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:41:16.251440 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:41:16.251447 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:41:16.251454 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:41:16.251460 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 05:41:16.251466 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 05:41:16.251473 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:41:16.251479 | orchestrator |
2026-03-24 05:41:16.251485 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-24 05:41:16.251492 | orchestrator | Tuesday 24 March 2026 05:40:56 +0000 (0:00:02.234) 0:51:37.851 *********
2026-03-24 05:41:16.251498 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:41:16.251504 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:41:16.251510 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:41:16.251517 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-24 05:41:16.251531 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-24 05:41:16.251537 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 05:41:16.251543 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:41:16.251550 | orchestrator |
2026-03-24 05:41:16.251556 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-03-24 05:41:16.251563 | orchestrator | Tuesday 24 March 2026 05:40:59 +0000 (0:00:02.612) 0:51:40.464 *********
2026-03-24 05:41:16.251577 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251583 | orchestrator |
2026-03-24 05:41:16.251590 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 05:41:16.251597 | orchestrator | Tuesday 24 March 2026 05:41:00 +0000 (0:00:01.116) 0:51:41.580 *********
2026-03-24 05:41:16.251603 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-03-24 05:41:16.251610 | orchestrator |
2026-03-24 05:41:16.251616 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 05:41:16.251622 | orchestrator | Tuesday 24 March 2026 05:41:01 +0000 (0:00:01.128) 0:51:42.709 *********
2026-03-24 05:41:16.251628 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-03-24 05:41:16.251634 | orchestrator |
2026-03-24 05:41:16.251640 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 05:41:16.251647 | orchestrator | Tuesday 24 March 2026 05:41:02 +0000 (0:00:01.160) 0:51:43.870 *********
2026-03-24 05:41:16.251653 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251660 | orchestrator |
2026-03-24 05:41:16.251666 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 05:41:16.251673 | orchestrator | Tuesday 24 March 2026 05:41:04 +0000 (0:00:01.114) 0:51:44.984 *********
2026-03-24 05:41:16.251679 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:41:16.251686 | orchestrator |
2026-03-24 05:41:16.251692 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 05:41:16.251698 | orchestrator | Tuesday 24 March 2026 05:41:05 +0000 (0:00:01.517) 0:51:46.501 *********
2026-03-24 05:41:16.251705 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:41:16.251711 | orchestrator |
2026-03-24 05:41:16.251717 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 05:41:16.251724 | orchestrator | Tuesday 24 March 2026 05:41:07 +0000 (0:00:01.538) 0:51:48.039 *********
2026-03-24 05:41:16.251730 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:41:16.251736 | orchestrator |
2026-03-24 05:41:16.251742 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 05:41:16.251749 | orchestrator | Tuesday 24 March 2026 05:41:08 +0000 (0:00:01.490) 0:51:49.530 *********
2026-03-24 05:41:16.251755 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251762 | orchestrator |
2026-03-24 05:41:16.251767 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 05:41:16.251774 | orchestrator | Tuesday 24 March 2026 05:41:09 +0000 (0:00:01.137) 0:51:50.667 *********
2026-03-24 05:41:16.251780 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251786 | orchestrator |
2026-03-24 05:41:16.251793 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 05:41:16.251799 | orchestrator | Tuesday 24 March 2026 05:41:10 +0000 (0:00:01.156) 0:51:51.824 *********
2026-03-24 05:41:16.251805 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251812 | orchestrator |
2026-03-24 05:41:16.251818 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 05:41:16.251825 | orchestrator | Tuesday 24 March 2026 05:41:12 +0000 (0:00:01.114) 0:51:52.938 *********
2026-03-24 05:41:16.251831 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:41:16.251837 | orchestrator |
2026-03-24 05:41:16.251843 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 05:41:16.251850 | orchestrator | Tuesday 24 March 2026 05:41:13 +0000 (0:00:01.527) 0:51:54.465 *********
2026-03-24 05:41:16.251856 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:41:16.251862 | orchestrator |
2026-03-24 05:41:16.251868 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 05:41:16.251875 | orchestrator | Tuesday 24 March 2026 05:41:15 +0000 (0:00:01.540) 0:51:56.006 *********
2026-03-24 05:41:16.251886 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:41:16.251892 | orchestrator |
2026-03-24 05:41:16.251898 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:41:16.251916 | orchestrator | Tuesday 24 March 2026 05:41:16 +0000 (0:00:01.133) 0:51:57.139 *********
2026-03-24 05:42:04.188881 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.188963 | orchestrator |
2026-03-24 05:42:04.188971 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:42:04.188976 | orchestrator | Tuesday 24 March 2026 05:41:17 +0000 (0:00:01.177) 0:51:58.317 *********
2026-03-24 05:42:04.188981 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:04.188985 | orchestrator |
2026-03-24 05:42:04.188989 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:42:04.188994 | orchestrator | Tuesday 24 March 2026 05:41:18 +0000 (0:00:01.134) 0:51:59.452 *********
2026-03-24 05:42:04.188997 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:04.189001 | orchestrator |
2026-03-24 05:42:04.189005 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:42:04.189009 | orchestrator | Tuesday 24 March 2026 05:41:19 +0000 (0:00:01.125) 0:52:00.577 *********
2026-03-24 05:42:04.189013 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:04.189017 | orchestrator |
2026-03-24 05:42:04.189021 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:42:04.189024 | orchestrator | Tuesday 24 March 2026 05:41:20 +0000 (0:00:01.143) 0:52:01.721 *********
2026-03-24 05:42:04.189028 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189032 | orchestrator |
2026-03-24 05:42:04.189036 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:42:04.189039 | orchestrator | Tuesday 24 March 2026 05:41:21 +0000 (0:00:01.122) 0:52:02.844 *********
2026-03-24 05:42:04.189043 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189047 | orchestrator |
2026-03-24 05:42:04.189051 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:42:04.189055 | orchestrator | Tuesday 24 March 2026 05:41:23 +0000 (0:00:01.125) 0:52:03.969 *********
2026-03-24 05:42:04.189058 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189062 | orchestrator |
2026-03-24 05:42:04.189066 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:42:04.189070 | orchestrator | Tuesday 24 March 2026 05:41:24 +0000 (0:00:01.137) 0:52:05.107 *********
2026-03-24 05:42:04.189085 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:04.189089 | orchestrator |
2026-03-24 05:42:04.189093 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:42:04.189097 | orchestrator | Tuesday 24 March 2026 05:41:25 +0000 (0:00:01.267) 0:52:06.375 *********
2026-03-24 05:42:04.189101 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:04.189104 | orchestrator |
2026-03-24 05:42:04.189108 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:42:04.189112 | orchestrator | Tuesday 24 March 2026 05:41:26 +0000 (0:00:01.170) 0:52:07.546 *********
2026-03-24 05:42:04.189116 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189120 | orchestrator |
2026-03-24 05:42:04.189123 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:42:04.189127 | orchestrator | Tuesday 24 March 2026 05:41:27 +0000 (0:00:01.090) 0:52:08.637 *********
2026-03-24 05:42:04.189131 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189135 | orchestrator |
2026-03-24 05:42:04.189139 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 05:42:04.189142 | orchestrator | Tuesday 24 March 2026 05:41:28 +0000 (0:00:01.113) 0:52:09.750 *********
2026-03-24 05:42:04.189146 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189150 | orchestrator |
2026-03-24 05:42:04.189154 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 05:42:04.189158 | orchestrator | Tuesday 24 March 2026 05:41:29 +0000 (0:00:01.102) 0:52:10.852 *********
2026-03-24 05:42:04.189161 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189165 | orchestrator |
2026-03-24 05:42:04.189169 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 05:42:04.189191 | orchestrator | Tuesday 24 March 2026 05:41:31 +0000 (0:00:01.105) 0:52:11.957 *********
2026-03-24 05:42:04.189195 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189199 | orchestrator |
2026-03-24 05:42:04.189203 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 05:42:04.189206 | orchestrator | Tuesday 24 March 2026 05:41:32 +0000 (0:00:01.093) 0:52:13.051 *********
2026-03-24 05:42:04.189210 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189214 | orchestrator |
2026-03-24 05:42:04.189218 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 05:42:04.189221 | orchestrator | Tuesday 24 March 2026 05:41:33 +0000 (0:00:01.099) 0:52:14.151 *********
2026-03-24 05:42:04.189225 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189229 | orchestrator |
2026-03-24 05:42:04.189233 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 05:42:04.189237 | orchestrator | Tuesday 24 March 2026 05:41:34 +0000 (0:00:01.105) 0:52:15.256 *********
2026-03-24 05:42:04.189241 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189245 | orchestrator |
2026-03-24 05:42:04.189248 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 05:42:04.189252 | orchestrator | Tuesday 24 March 2026 05:41:35 +0000 (0:00:01.092) 0:52:16.349 *********
2026-03-24 05:42:04.189256 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189260 | orchestrator |
2026-03-24 05:42:04.189263 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 05:42:04.189267 | orchestrator | Tuesday 24 March 2026 05:41:36 +0000 (0:00:01.096) 0:52:17.445 *********
2026-03-24 05:42:04.189271 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189275 | orchestrator |
2026-03-24 05:42:04.189278 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 05:42:04.189282 | orchestrator | Tuesday 24 March 2026 05:41:37 +0000 (0:00:01.099) 0:52:18.544 *********
2026-03-24 05:42:04.189286 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189290 | orchestrator |
2026-03-24 05:42:04.189305 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 05:42:04.189309 | orchestrator | Tuesday 24 March 2026 05:41:38 +0000 (0:00:01.117) 0:52:19.662 *********
2026-03-24 05:42:04.189312 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189316 | orchestrator |
2026-03-24 05:42:04.189330 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 05:42:04.189334 | orchestrator | Tuesday 24 March 2026 05:41:39 +0000 (0:00:01.113) 0:52:20.776 *********
2026-03-24 05:42:04.189338 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:04.189342 | orchestrator |
2026-03-24 05:42:04.189346 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 05:42:04.189436 | orchestrator | Tuesday 24 March 2026 05:41:41 +0000 (0:00:01.946) 0:52:22.722 *********
2026-03-24 05:42:04.189443 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:04.189449 | orchestrator |
2026-03-24 05:42:04.189457 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 05:42:04.189462 | orchestrator | Tuesday 24 March 2026 05:41:44 +0000 (0:00:02.212) 0:52:24.934 *********
2026-03-24 05:42:04.189466 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-03-24 05:42:04.189471 | orchestrator |
2026-03-24 05:42:04.189476 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 05:42:04.189480 | orchestrator | Tuesday 24 March 2026 05:41:45 +0000 (0:00:01.125) 0:52:26.059 *********
2026-03-24 05:42:04.189484 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189489 | orchestrator |
2026-03-24 05:42:04.189493 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 05:42:04.189497 | orchestrator | Tuesday 24 March 2026 05:41:46 +0000 (0:00:01.130) 0:52:27.190 *********
2026-03-24 05:42:04.189502 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189512 | orchestrator |
2026-03-24 05:42:04.189516 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 05:42:04.189521 | orchestrator | Tuesday 24 March 2026 05:41:47 +0000 (0:00:01.127) 0:52:28.317 *********
2026-03-24 05:42:04.189525 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 05:42:04.189530 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 05:42:04.189534 | orchestrator |
2026-03-24 05:42:04.189538 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 05:42:04.189542 | orchestrator | Tuesday 24 March 2026 05:41:49 +0000 (0:00:01.882) 0:52:30.199 *********
2026-03-24 05:42:04.189547 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:04.189551 | orchestrator |
2026-03-24 05:42:04.189555 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-24 05:42:04.189559 | orchestrator | Tuesday 24 March 2026 05:41:50 +0000 (0:00:01.515) 0:52:31.715 *********
2026-03-24 05:42:04.189564 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189568 | orchestrator |
2026-03-24 05:42:04.189572 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-24 05:42:04.189577 | orchestrator | Tuesday 24 March 2026 05:41:51 +0000 (0:00:01.130) 0:52:32.846 *********
2026-03-24 05:42:04.189581 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189585 | orchestrator |
2026-03-24 05:42:04.189590 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 05:42:04.189594 | orchestrator | Tuesday 24 March 2026 05:41:53 +0000 (0:00:01.163) 0:52:34.009 *********
2026-03-24 05:42:04.189599 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189603 | orchestrator |
2026-03-24 05:42:04.189607 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 05:42:04.189612 | orchestrator | Tuesday 24 March 2026 05:41:54 +0000 (0:00:01.121) 0:52:35.130 *********
2026-03-24 05:42:04.189616 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-03-24 05:42:04.189620 | orchestrator |
2026-03-24 05:42:04.189625 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-24 05:42:04.189629 | orchestrator | Tuesday 24 March 2026 05:41:55 +0000 (0:00:01.198) 0:52:36.329 *********
2026-03-24 05:42:04.189633 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:04.189638 | orchestrator |
2026-03-24 05:42:04.189642 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-24 05:42:04.189647 | orchestrator | Tuesday 24 March 2026 05:41:57 +0000 (0:00:01.747) 0:52:38.076 *********
2026-03-24 05:42:04.189651 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-24 05:42:04.189656 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-24 05:42:04.189660 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-24 05:42:04.189664 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189669 | orchestrator |
2026-03-24 05:42:04.189673 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-24 05:42:04.189677 | orchestrator | Tuesday 24 March 2026 05:41:58 +0000 (0:00:01.131) 0:52:39.207 *********
2026-03-24 05:42:04.189682 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189686 | orchestrator |
2026-03-24 05:42:04.189690 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-24 05:42:04.189695 | orchestrator | Tuesday 24 March 2026 05:41:59 +0000 (0:00:01.121) 0:52:40.329 *********
2026-03-24 05:42:04.189699 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189703 | orchestrator |
2026-03-24 05:42:04.189707 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-24 05:42:04.189712 | orchestrator | Tuesday 24 March 2026 05:42:00 +0000 (0:00:01.189) 0:52:41.519 *********
2026-03-24 05:42:04.189716 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189720 | orchestrator |
2026-03-24 05:42:04.189725 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-24 05:42:04.189732 | orchestrator | Tuesday 24 March 2026 05:42:01 +0000 (0:00:01.170) 0:52:42.689 *********
2026-03-24 05:42:04.189737 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189741 | orchestrator |
2026-03-24 05:42:04.189750 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-24 05:42:04.189755 | orchestrator | Tuesday 24 March 2026 05:42:02 +0000 (0:00:01.179) 0:52:43.869 *********
2026-03-24 05:42:04.189759 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:04.189763 | orchestrator |
2026-03-24 05:42:04.189772 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 05:42:54.509053 | orchestrator | Tuesday 24 March 2026 05:42:04 +0000 (0:00:01.203) 0:52:45.073 *********
2026-03-24 05:42:54.509175 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:54.509195 | orchestrator |
2026-03-24 05:42:54.509212 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 05:42:54.509228 | orchestrator | Tuesday 24 March 2026 05:42:06 +0000 (0:00:02.642) 0:52:47.715 *********
2026-03-24 05:42:54.509242 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:54.509256 | orchestrator |
2026-03-24 05:42:54.509270 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 05:42:54.509284 | orchestrator | Tuesday 24 March 2026 05:42:07 +0000 (0:00:01.124) 0:52:48.840 *********
2026-03-24 05:42:54.509299 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-03-24 05:42:54.509313 | orchestrator |
2026-03-24 05:42:54.509324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-24 05:42:54.509337 | orchestrator | Tuesday 24 March 2026 05:42:09 +0000 (0:00:01.104) 0:52:49.944 *********
2026-03-24 05:42:54.509351 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.509410 | orchestrator |
2026-03-24 05:42:54.509427 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-24 05:42:54.509440 | orchestrator | Tuesday 24 March 2026 05:42:10 +0000 (0:00:01.101) 0:52:51.047 *********
2026-03-24 05:42:54.509453 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.509465 | orchestrator |
2026-03-24 05:42:54.509477 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-24 05:42:54.509491 | orchestrator | Tuesday 24 March 2026 05:42:11 +0000 (0:00:01.129) 0:52:52.176 *********
2026-03-24 05:42:54.509503 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.509517 | orchestrator |
2026-03-24 05:42:54.509530 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-24 05:42:54.509542 | orchestrator | Tuesday 24 March 2026 05:42:12 +0000 (0:00:01.125) 0:52:53.302 *********
2026-03-24 05:42:54.509555 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.509569 | orchestrator |
2026-03-24 05:42:54.509582 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-24 05:42:54.509596 | orchestrator | Tuesday 24 March 2026 05:42:13 +0000 (0:00:01.116) 0:52:54.418 *********
2026-03-24 05:42:54.509608 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.509621 | orchestrator |
2026-03-24 05:42:54.509635 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-24 05:42:54.509650 | orchestrator | Tuesday 24 March 2026 05:42:14 +0000 (0:00:01.119) 0:52:55.538 *********
2026-03-24 05:42:54.509664 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.509679 | orchestrator |
2026-03-24 05:42:54.509693 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-24 05:42:54.509708 | orchestrator | Tuesday 24 March 2026 05:42:15 +0000 (0:00:01.126) 0:52:56.664 *********
2026-03-24 05:42:54.509722 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.509736 | orchestrator |
2026-03-24 05:42:54.509750 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-24 05:42:54.509764 | orchestrator | Tuesday 24 March 2026 05:42:16 +0000 (0:00:01.126) 0:52:57.791 *********
2026-03-24 05:42:54.509779 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.509823 | orchestrator |
2026-03-24 05:42:54.509837 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-24 05:42:54.509852 | orchestrator | Tuesday 24 March 2026 05:42:17 +0000 (0:00:01.101) 0:52:58.893 *********
2026-03-24 05:42:54.509864 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:42:54.509879 | orchestrator |
2026-03-24 05:42:54.509891 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 05:42:54.509905 | orchestrator | Tuesday 24 March 2026 05:42:19 +0000 (0:00:01.137) 0:53:00.031 *********
2026-03-24 05:42:54.509918 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-03-24 05:42:54.509933 | orchestrator |
2026-03-24 05:42:54.509946 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-24 05:42:54.509960 | orchestrator | Tuesday 24 March 2026 05:42:20 +0000 (0:00:01.100) 0:53:01.131 *********
2026-03-24 05:42:54.509974 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-03-24 05:42:54.509988 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-24 05:42:54.510001 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-24 05:42:54.510080 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-24 05:42:54.510100 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-24 05:42:54.510114 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-24 05:42:54.510127 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-24 05:42:54.510141 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-24 05:42:54.510156 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-24 05:42:54.510169 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-24 05:42:54.510184 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-24 05:42:54.510198 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-24 05:42:54.510211 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-24 05:42:54.510225 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-24 05:42:54.510238 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-03-24 05:42:54.510252 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-03-24 05:42:54.510266 | orchestrator |
2026-03-24 05:42:54.510296 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-24 05:42:54.510310 | orchestrator | Tuesday 24 March 2026 05:42:27 +0000 (0:00:06.776) 0:53:07.908 *********
2026-03-24 05:42:54.510324 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-03-24 05:42:54.510338 | orchestrator |
2026-03-24 05:42:54.510396 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-24 05:42:54.510412 | orchestrator | Tuesday 24 March 2026 05:42:28 +0000 (0:00:01.105) 0:53:09.013 *********
2026-03-24 05:42:54.510427 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-24 05:42:54.510443 | orchestrator |
2026-03-24 05:42:54.510457 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-24 05:42:54.510472 | orchestrator | Tuesday 24 March 2026 05:42:29 +0000 (0:00:01.493) 0:53:10.506 *********
2026-03-24 05:42:54.510487 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-24 05:42:54.510501 | orchestrator |
2026-03-24 05:42:54.510516 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-24 05:42:54.510530 | orchestrator | Tuesday 24 March 2026 05:42:31 +0000 (0:00:02.003) 0:53:12.510 *********
2026-03-24 05:42:54.510545 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.510640 | orchestrator |
2026-03-24 05:42:54.510656 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-24 05:42:54.510683 | orchestrator | Tuesday 24 March 2026 05:42:32 +0000 (0:00:01.120) 0:53:13.630 *********
2026-03-24 05:42:54.510697 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.510708 | orchestrator |
2026-03-24 05:42:54.510722 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-24 05:42:54.510736 | orchestrator | Tuesday 24 March 2026 05:42:33 +0000 (0:00:01.099) 0:53:14.730 *********
2026-03-24 05:42:54.510750 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.510763 | orchestrator |
2026-03-24 05:42:54.510777 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-24 05:42:54.510791 | orchestrator | Tuesday 24 March 2026 05:42:34 +0000 (0:00:01.097) 0:53:15.828 *********
2026-03-24 05:42:54.510804 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.510818 | orchestrator |
2026-03-24 05:42:54.510872 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-24 05:42:54.510889 | orchestrator | Tuesday 24 March 2026 05:42:36 +0000 (0:00:01.158) 0:53:16.986 *********
2026-03-24 05:42:54.510903 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.510915 | orchestrator |
2026-03-24 05:42:54.510928 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-24 05:42:54.510942 | orchestrator | Tuesday 24 March 2026 05:42:37 +0000 (0:00:01.113) 0:53:18.100 *********
2026-03-24 05:42:54.510955 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.510969 | orchestrator |
2026-03-24 05:42:54.510983 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-24 05:42:54.510997 | orchestrator | Tuesday 24 March 2026 05:42:38 +0000 (0:00:01.113) 0:53:19.214 *********
2026-03-24 05:42:54.511011 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.511024 | orchestrator |
2026-03-24 05:42:54.511038 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-24 05:42:54.511052 | orchestrator | Tuesday 24 March 2026 05:42:39 +0000 (0:00:01.120) 0:53:20.335 *********
2026-03-24 05:42:54.511066 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.511079 | orchestrator |
2026-03-24 05:42:54.511093 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-24 05:42:54.511107 | orchestrator | Tuesday 24 March 2026 05:42:40 +0000 (0:00:01.101) 0:53:21.436 *********
2026-03-24 05:42:54.511121 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.511134 | orchestrator |
2026-03-24 05:42:54.511148 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-24 05:42:54.511162 | orchestrator | Tuesday 24 March 2026 05:42:41 +0000 (0:00:01.106) 0:53:22.543 *********
2026-03-24 05:42:54.511176 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.511189 | orchestrator |
2026-03-24 05:42:54.511203 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-24 05:42:54.511217 | orchestrator | Tuesday 24 March 2026 05:42:42 +0000 (0:00:01.110) 0:53:23.653 *********
2026-03-24 05:42:54.511230 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:42:54.511244 | orchestrator |
2026-03-24 05:42:54.511257 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-24 05:42:54.511271 | orchestrator | Tuesday 24 March 2026 05:42:43 +0000 (0:00:01.115) 0:53:24.769 *********
2026-03-24 05:42:54.511284 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-24 05:42:54.511298 | orchestrator |
2026-03-24 05:42:54.511312 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-24 05:42:54.511326 | orchestrator | Tuesday 24 March 2026 05:42:48 +0000 (0:00:04.485) 0:53:29.254 *********
2026-03-24 05:42:54.511339 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-24 05:42:54.511353 | orchestrator |
2026-03-24 05:42:54.511457 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-24 05:42:54.511473 | orchestrator | Tuesday 24 March 2026 05:42:49 +0000 (0:00:01.161) 0:53:30.416 *********
2026-03-24 05:42:54.511510 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-24 05:42:54.511541 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-24 05:43:55.099473 | orchestrator |
2026-03-24 05:43:55.099595 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-24 05:43:55.099613 | orchestrator | Tuesday 24 March 2026 05:42:54 +0000 (0:00:04.983) 0:53:35.399 *********
2026-03-24 05:43:55.099625 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:43:55.099638 | orchestrator |
2026-03-24 05:43:55.099650 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-24 05:43:55.099662 | orchestrator | Tuesday 24 March 2026 05:42:55 +0000 (0:00:01.197) 0:53:36.597 *********
2026-03-24 05:43:55.099673 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:43:55.099684 | orchestrator |
2026-03-24 05:43:55.099696 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:43:55.099708 | orchestrator | Tuesday 24 March 2026 05:42:56 +0000 (0:00:01.129) 0:53:37.727 *********
2026-03-24 05:43:55.099719 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:43:55.099730 | orchestrator |
2026-03-24 05:43:55.099741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:43:55.099753 | orchestrator | Tuesday 24 March 2026 05:42:57 +0000 (0:00:01.124) 0:53:38.851 *********
2026-03-24 05:43:55.099764 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:43:55.099775 | orchestrator |
2026-03-24 05:43:55.099786 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:43:55.099797 | orchestrator | Tuesday 24 March 2026 05:42:59 +0000 (0:00:01.160) 0:53:40.011 *********
2026-03-24 05:43:55.099808 | orchestrator | skipping: [testbed-node-3]
2026-03-24 05:43:55.099819 | orchestrator |
2026-03-24 05:43:55.099830 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:43:55.099841 | orchestrator | Tuesday 24 March 2026 05:43:00 +0000 (0:00:01.154) 0:53:41.166 *********
2026-03-24 05:43:55.099852 | orchestrator | ok: [testbed-node-3]
2026-03-24 05:43:55.099864 | orchestrator |
2026-03-24 05:43:55.099875 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:43:55.099886 | orchestrator | Tuesday 24 March 2026 05:43:01 +0000 (0:00:01.263) 0:53:42.429
********* 2026-03-24 05:43:55.099898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:43:55.099909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:43:55.099920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:43:55.099931 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:43:55.099942 | orchestrator | 2026-03-24 05:43:55.099953 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:43:55.099965 | orchestrator | Tuesday 24 March 2026 05:43:02 +0000 (0:00:01.392) 0:53:43.822 ********* 2026-03-24 05:43:55.099977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:43:55.099990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:43:55.100004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:43:55.100021 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:43:55.100039 | orchestrator | 2026-03-24 05:43:55.100065 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:43:55.100146 | orchestrator | Tuesday 24 March 2026 05:43:04 +0000 (0:00:01.435) 0:53:45.257 ********* 2026-03-24 05:43:55.100182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:43:55.100201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:43:55.100218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:43:55.100237 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:43:55.100255 | orchestrator | 2026-03-24 05:43:55.100273 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:43:55.100291 | orchestrator | Tuesday 24 March 2026 05:43:05 +0000 (0:00:01.404) 0:53:46.662 ********* 2026-03-24 05:43:55.100309 | orchestrator | 
ok: [testbed-node-3] 2026-03-24 05:43:55.100362 | orchestrator | 2026-03-24 05:43:55.100381 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:43:55.100430 | orchestrator | Tuesday 24 March 2026 05:43:06 +0000 (0:00:01.129) 0:53:47.792 ********* 2026-03-24 05:43:55.100448 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-24 05:43:55.100464 | orchestrator | 2026-03-24 05:43:55.100480 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:43:55.100496 | orchestrator | Tuesday 24 March 2026 05:43:08 +0000 (0:00:01.349) 0:53:49.141 ********* 2026-03-24 05:43:55.100512 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:43:55.100528 | orchestrator | 2026-03-24 05:43:55.100544 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-24 05:43:55.100560 | orchestrator | Tuesday 24 March 2026 05:43:09 +0000 (0:00:01.730) 0:53:50.872 ********* 2026-03-24 05:43:55.100576 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:43:55.100593 | orchestrator | 2026-03-24 05:43:55.100609 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-24 05:43:55.100625 | orchestrator | Tuesday 24 March 2026 05:43:11 +0000 (0:00:01.089) 0:53:51.961 ********* 2026-03-24 05:43:55.100641 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3 2026-03-24 05:43:55.100657 | orchestrator | 2026-03-24 05:43:55.100673 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-24 05:43:55.100689 | orchestrator | Tuesday 24 March 2026 05:43:12 +0000 (0:00:01.396) 0:53:53.358 ********* 2026-03-24 05:43:55.100724 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-24 05:43:55.100741 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 
2026-03-24 05:43:55.100757 | orchestrator | 2026-03-24 05:43:55.100775 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-24 05:43:55.100792 | orchestrator | Tuesday 24 March 2026 05:43:14 +0000 (0:00:01.850) 0:53:55.209 ********* 2026-03-24 05:43:55.100809 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:43:55.100846 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 05:43:55.100895 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 05:43:55.100908 | orchestrator | 2026-03-24 05:43:55.100918 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:43:55.100929 | orchestrator | Tuesday 24 March 2026 05:43:17 +0000 (0:00:03.190) 0:53:58.400 ********* 2026-03-24 05:43:55.100940 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-24 05:43:55.100952 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 05:43:55.100963 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:43:55.100973 | orchestrator | 2026-03-24 05:43:55.100985 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-24 05:43:55.100995 | orchestrator | Tuesday 24 March 2026 05:43:19 +0000 (0:00:01.957) 0:54:00.357 ********* 2026-03-24 05:43:55.101006 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:43:55.101017 | orchestrator | 2026-03-24 05:43:55.101028 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-24 05:43:55.101039 | orchestrator | Tuesday 24 March 2026 05:43:20 +0000 (0:00:01.478) 0:54:01.836 ********* 2026-03-24 05:43:55.101065 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:43:55.101076 | orchestrator | 2026-03-24 05:43:55.101087 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-24 
05:43:55.101098 | orchestrator | Tuesday 24 March 2026 05:43:22 +0000 (0:00:01.091) 0:54:02.927 ********* 2026-03-24 05:43:55.101109 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3 2026-03-24 05:43:55.101127 | orchestrator | 2026-03-24 05:43:55.101147 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-24 05:43:55.101166 | orchestrator | Tuesday 24 March 2026 05:43:23 +0000 (0:00:01.499) 0:54:04.427 ********* 2026-03-24 05:43:55.101185 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3 2026-03-24 05:43:55.101206 | orchestrator | 2026-03-24 05:43:55.101226 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-24 05:43:55.101238 | orchestrator | Tuesday 24 March 2026 05:43:25 +0000 (0:00:01.496) 0:54:05.923 ********* 2026-03-24 05:43:55.101249 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:43:55.101260 | orchestrator | 2026-03-24 05:43:55.101271 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-24 05:43:55.101281 | orchestrator | Tuesday 24 March 2026 05:43:27 +0000 (0:00:02.063) 0:54:07.987 ********* 2026-03-24 05:43:55.101292 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:43:55.101303 | orchestrator | 2026-03-24 05:43:55.101314 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-24 05:43:55.101324 | orchestrator | Tuesday 24 March 2026 05:43:29 +0000 (0:00:01.974) 0:54:09.961 ********* 2026-03-24 05:43:55.101335 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:43:55.101353 | orchestrator | 2026-03-24 05:43:55.101372 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-24 05:43:55.101388 | orchestrator | Tuesday 24 March 2026 05:43:31 +0000 (0:00:02.345) 0:54:12.307 ********* 2026-03-24 05:43:55.101435 | 
orchestrator | ok: [testbed-node-3] 2026-03-24 05:43:55.101454 | orchestrator | 2026-03-24 05:43:55.101472 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-24 05:43:55.101490 | orchestrator | Tuesday 24 March 2026 05:43:33 +0000 (0:00:02.340) 0:54:14.648 ********* 2026-03-24 05:43:55.101509 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:43:55.101527 | orchestrator | 2026-03-24 05:43:55.101546 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-03-24 05:43:55.101563 | orchestrator | Tuesday 24 March 2026 05:43:35 +0000 (0:00:01.640) 0:54:16.288 ********* 2026-03-24 05:43:55.101581 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:43:55.101599 | orchestrator | 2026-03-24 05:43:55.101617 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-03-24 05:43:55.101636 | orchestrator | Tuesday 24 March 2026 05:43:36 +0000 (0:00:01.103) 0:54:17.392 ********* 2026-03-24 05:43:55.101654 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:43:55.101673 | orchestrator | 2026-03-24 05:43:55.101691 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-03-24 05:43:55.101709 | orchestrator | 2026-03-24 05:43:55.101728 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:43:55.101747 | orchestrator | Tuesday 24 March 2026 05:43:46 +0000 (0:00:10.346) 0:54:27.739 ********* 2026-03-24 05:43:55.101759 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-5 2026-03-24 05:43:55.101770 | orchestrator | 2026-03-24 05:43:55.101781 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:43:55.101792 | orchestrator | Tuesday 24 March 2026 05:43:48 +0000 (0:00:01.184) 0:54:28.924 ********* 2026-03-24 05:43:55.101803 | 
orchestrator | ok: [testbed-node-4] 2026-03-24 05:43:55.101814 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:43:55.101824 | orchestrator | 2026-03-24 05:43:55.101835 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:43:55.101846 | orchestrator | Tuesday 24 March 2026 05:43:49 +0000 (0:00:01.552) 0:54:30.476 ********* 2026-03-24 05:43:55.101868 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:43:55.101879 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:43:55.101889 | orchestrator | 2026-03-24 05:43:55.101900 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:43:55.101911 | orchestrator | Tuesday 24 March 2026 05:43:51 +0000 (0:00:01.507) 0:54:31.984 ********* 2026-03-24 05:43:55.101922 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:43:55.101932 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:43:55.101943 | orchestrator | 2026-03-24 05:43:55.101962 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:43:55.101974 | orchestrator | Tuesday 24 March 2026 05:43:52 +0000 (0:00:01.516) 0:54:33.500 ********* 2026-03-24 05:43:55.101984 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:43:55.101995 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:43:55.102006 | orchestrator | 2026-03-24 05:43:55.102083 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:43:55.102099 | orchestrator | Tuesday 24 March 2026 05:43:53 +0000 (0:00:01.245) 0:54:34.746 ********* 2026-03-24 05:43:55.102109 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:43:55.102133 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:17.348315 | orchestrator | 2026-03-24 05:44:17.348512 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:44:17.348543 | orchestrator | Tuesday 24 March 2026 
05:43:55 +0000 (0:00:01.236) 0:54:35.983 ********* 2026-03-24 05:44:17.348563 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:44:17.348583 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:17.348601 | orchestrator | 2026-03-24 05:44:17.348621 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:44:17.348638 | orchestrator | Tuesday 24 March 2026 05:43:56 +0000 (0:00:01.312) 0:54:37.295 ********* 2026-03-24 05:44:17.348649 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:17.348661 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:44:17.348672 | orchestrator | 2026-03-24 05:44:17.348683 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:44:17.348695 | orchestrator | Tuesday 24 March 2026 05:43:57 +0000 (0:00:01.221) 0:54:38.517 ********* 2026-03-24 05:44:17.348706 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:44:17.348717 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:17.348728 | orchestrator | 2026-03-24 05:44:17.348738 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:44:17.348749 | orchestrator | Tuesday 24 March 2026 05:43:58 +0000 (0:00:01.201) 0:54:39.718 ********* 2026-03-24 05:44:17.348760 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:44:17.348771 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:44:17.348781 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:44:17.348792 | orchestrator | 2026-03-24 05:44:17.348803 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:44:17.348813 | orchestrator | Tuesday 24 March 2026 05:44:00 +0000 (0:00:01.733) 0:54:41.452 ********* 2026-03-24 05:44:17.348824 | 
orchestrator | ok: [testbed-node-4] 2026-03-24 05:44:17.348835 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:17.348846 | orchestrator | 2026-03-24 05:44:17.348858 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:44:17.348870 | orchestrator | Tuesday 24 March 2026 05:44:01 +0000 (0:00:01.338) 0:54:42.790 ********* 2026-03-24 05:44:17.348883 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:44:17.348895 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:44:17.348907 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:44:17.348919 | orchestrator | 2026-03-24 05:44:17.348932 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:44:17.348972 | orchestrator | Tuesday 24 March 2026 05:44:04 +0000 (0:00:02.818) 0:54:45.609 ********* 2026-03-24 05:44:17.348986 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-24 05:44:17.348998 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-24 05:44:17.349011 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-24 05:44:17.349023 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:17.349035 | orchestrator | 2026-03-24 05:44:17.349047 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:44:17.349059 | orchestrator | Tuesday 24 March 2026 05:44:06 +0000 (0:00:01.428) 0:54:47.038 ********* 2026-03-24 05:44:17.349073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:44:17.349088 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:44:17.349102 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:44:17.349115 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:17.349128 | orchestrator | 2026-03-24 05:44:17.349140 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:44:17.349150 | orchestrator | Tuesday 24 March 2026 05:44:07 +0000 (0:00:01.576) 0:54:48.614 ********* 2026-03-24 05:44:17.349178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:17.349213 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:17.349225 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:17.349236 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:17.349247 | orchestrator | 2026-03-24 05:44:17.349258 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:44:17.349269 | orchestrator | Tuesday 24 March 2026 05:44:08 +0000 (0:00:01.198) 0:54:49.813 ********* 2026-03-24 05:44:17.349281 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:44:02.432923', 'end': '2026-03-24 05:44:02.480110', 'delta': '0:00:00.047187', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:44:17.349304 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:44:02.990538', 'end': '2026-03-24 05:44:03.055310', 'delta': '0:00:00.064772', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:44:17.349316 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:44:03.552911', 'end': '2026-03-24 05:44:03.599601', 'delta': '0:00:00.046690', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:44:17.349327 | orchestrator | 2026-03-24 05:44:17.349338 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:44:17.349349 | orchestrator | Tuesday 24 March 2026 05:44:10 +0000 (0:00:01.207) 0:54:51.021 ********* 2026-03-24 05:44:17.349360 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:44:17.349371 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:17.349381 | orchestrator | 2026-03-24 05:44:17.349392 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:44:17.349439 | orchestrator | Tuesday 24 March 2026 05:44:11 +0000 (0:00:01.347) 0:54:52.368 ********* 2026-03-24 05:44:17.349451 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:17.349462 | orchestrator | 2026-03-24 05:44:17.349473 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:44:17.349483 | orchestrator | Tuesday 24 
March 2026 05:44:12 +0000 (0:00:01.009) 0:54:53.377 ********* 2026-03-24 05:44:17.349494 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:44:17.349505 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:17.349516 | orchestrator | 2026-03-24 05:44:17.349527 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:44:17.349543 | orchestrator | Tuesday 24 March 2026 05:44:13 +0000 (0:00:01.184) 0:54:54.562 ********* 2026-03-24 05:44:17.349554 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:44:17.349565 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:44:17.349576 | orchestrator | 2026-03-24 05:44:17.349587 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:44:17.349598 | orchestrator | Tuesday 24 March 2026 05:44:16 +0000 (0:00:02.483) 0:54:57.045 ********* 2026-03-24 05:44:17.349608 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:44:17.349625 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:28.538547 | orchestrator | 2026-03-24 05:44:28.538686 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:44:28.538712 | orchestrator | Tuesday 24 March 2026 05:44:17 +0000 (0:00:01.193) 0:54:58.238 ********* 2026-03-24 05:44:28.538729 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:28.538746 | orchestrator | 2026-03-24 05:44:28.538762 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:44:28.538810 | orchestrator | Tuesday 24 March 2026 05:44:18 +0000 (0:00:01.090) 0:54:59.329 ********* 2026-03-24 05:44:28.538826 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:28.538843 | orchestrator | 2026-03-24 05:44:28.538859 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:44:28.538876 
| orchestrator | Tuesday 24 March 2026 05:44:19 +0000 (0:00:01.166) 0:55:00.496 ********* 2026-03-24 05:44:28.538894 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:28.538911 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:44:28.538927 | orchestrator | 2026-03-24 05:44:28.538944 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:44:28.538954 | orchestrator | Tuesday 24 March 2026 05:44:20 +0000 (0:00:01.180) 0:55:01.677 ********* 2026-03-24 05:44:28.538963 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:28.538973 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:44:28.538982 | orchestrator | 2026-03-24 05:44:28.538992 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:44:28.539002 | orchestrator | Tuesday 24 March 2026 05:44:21 +0000 (0:00:01.159) 0:55:02.837 ********* 2026-03-24 05:44:28.539014 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:44:28.539026 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:28.539037 | orchestrator | 2026-03-24 05:44:28.539049 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:44:28.539060 | orchestrator | Tuesday 24 March 2026 05:44:23 +0000 (0:00:01.207) 0:55:04.045 ********* 2026-03-24 05:44:28.539070 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:28.539081 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:44:28.539092 | orchestrator | 2026-03-24 05:44:28.539103 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:44:28.539114 | orchestrator | Tuesday 24 March 2026 05:44:24 +0000 (0:00:01.204) 0:55:05.249 ********* 2026-03-24 05:44:28.539125 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:44:28.539135 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:28.539146 | orchestrator | 2026-03-24 05:44:28.539157 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:44:28.539168 | orchestrator | Tuesday 24 March 2026 05:44:25 +0000 (0:00:01.438) 0:55:06.688 ********* 2026-03-24 05:44:28.539179 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:28.539190 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:44:28.539201 | orchestrator | 2026-03-24 05:44:28.539212 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:44:28.539224 | orchestrator | Tuesday 24 March 2026 05:44:26 +0000 (0:00:01.195) 0:55:07.883 ********* 2026-03-24 05:44:28.539235 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:44:28.539246 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:44:28.539257 | orchestrator | 2026-03-24 05:44:28.539268 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:44:28.539280 | orchestrator | Tuesday 24 March 2026 05:44:28 +0000 (0:00:01.316) 0:55:09.199 ********* 2026-03-24 05:44:28.539293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.539309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'uuids': ['b8232bef-dd2a-4f87-af94-920947facf6d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7']}})  2026-03-24 05:44:28.539352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a2e3e3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:44:28.539386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0']}})  2026-03-24 05:44:28.539398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.539437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.539453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:44:28.539464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.539474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3', 'dm-uuid-CRYPT-LUKS2-fea79c97fade4123ac0e1fedfdaf5b5c-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:44:28.539492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.539508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.539524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'uuids': 
['fea79c97-fade-4123-ac0e-1fedfdaf5b5c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3']}})  2026-03-24 05:44:28.616769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'uuids': ['37d3be03-52e4-42ec-a3b4-48d6e6f02ec4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4']}})  2026-03-24 05:44:28.616870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537']}})  2026-03-24 05:44:28.616887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b1c01c59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:44:28.616925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.616955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f']}})  2026-03-24 05:44:28.616992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '063919ee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': 
'165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:44:28.617007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.617020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.617038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.617055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:28.617068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:44:28.617089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7', 'dm-uuid-CRYPT-LUKS2-b8232befdd2a4f87af94920947facf6d-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:44:29.861852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:29.861983 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:29.862006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': 
{'virtual': 1, 'links': {'ids': ['dm-name-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA', 'dm-uuid-CRYPT-LUKS2-f7a38ad6fb8a47e49b12a27889e2fccd-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:44:29.862131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:29.862147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'uuids': ['f7a38ad6-fb8a-47e4-9b12-a27889e2fccd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA']}})  2026-03-24 05:44:29.862248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 
'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59']}})  2026-03-24 05:44:29.862277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:29.862331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8862b49e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:44:29.862355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:29.862382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:44:29.862396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4', 'dm-uuid-CRYPT-LUKS2-37d3be0352e442eca3b448d6e6f02ec4-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:44:29.862473 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:44:29.862495 | orchestrator | 2026-03-24 05:44:29.862508 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:44:29.862520 | orchestrator | Tuesday 24 March 2026 05:44:29 +0000 (0:00:01.441) 0:55:10.641 ********* 2026-03-24 05:44:29.862533 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.862555 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'uuids': ['b8232bef-dd2a-4f87-af94-920947facf6d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965602 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a2e3e3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965769 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965781 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965791 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965819 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965830 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3', 'dm-uuid-CRYPT-LUKS2-fea79c97fade4123ac0e1fedfdaf5b5c-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965846 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'uuids': ['fea79c97-fade-4123-ac0e-1fedfdaf5b5c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:29.965889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'uuids': ['37d3be03-52e4-42ec-a3b4-48d6e6f02ec4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.029549 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.029642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b1c01c59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.029662 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.029668 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': 
['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.029687 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '063919ee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'uuids': [], 'labels': [], 'masters': []}, 
'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.029698 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.029707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.029712 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-03-24 05:44:30.029717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.029727 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7', 'dm-uuid-CRYPT-LUKS2-b8232befdd2a4f87af94920947facf6d-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.159791 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.159864 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:44:30.159873 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.159890 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA', 'dm-uuid-CRYPT-LUKS2-f7a38ad6fb8a47e49b12a27889e2fccd-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.159894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.159898 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'uuids': ['f7a38ad6-fb8a-47e4-9b12-a27889e2fccd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.159929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.159935 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.159943 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8862b49e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:44:30.159955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:45:00.997855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:45:00.997950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4', 'dm-uuid-CRYPT-LUKS2-37d3be0352e442eca3b448d6e6f02ec4-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:45:00.997962 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:00.997970 | orchestrator | 2026-03-24 05:45:00.997978 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:45:00.997985 | orchestrator | Tuesday 24 March 2026 05:44:31 +0000 (0:00:01.550) 0:55:12.191 ********* 2026-03-24 05:45:00.997992 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:45:00.997999 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:45:00.998006 | orchestrator | 2026-03-24 05:45:00.998069 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:45:00.998078 | orchestrator | Tuesday 24 March 2026 05:44:32 +0000 (0:00:01.558) 0:55:13.749 ********* 2026-03-24 05:45:00.998085 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:45:00.998092 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:45:00.998099 | orchestrator | 2026-03-24 05:45:00.998106 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:45:00.998113 | orchestrator | Tuesday 24 March 2026 05:44:34 +0000 (0:00:01.187) 0:55:14.937 ********* 2026-03-24 05:45:00.998119 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:45:00.998126 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:45:00.998132 | orchestrator | 2026-03-24 05:45:00.998139 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:45:00.998145 | orchestrator | Tuesday 24 March 2026 05:44:35 +0000 (0:00:01.554) 0:55:16.491 ********* 2026-03-24 05:45:00.998152 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998158 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:00.998165 | orchestrator | 2026-03-24 05:45:00.998172 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-03-24 05:45:00.998199 | orchestrator | Tuesday 24 March 2026 05:44:36 +0000 (0:00:01.203) 0:55:17.695 ********* 2026-03-24 05:45:00.998206 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998213 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:00.998219 | orchestrator | 2026-03-24 05:45:00.998226 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:45:00.998233 | orchestrator | Tuesday 24 March 2026 05:44:38 +0000 (0:00:01.318) 0:55:19.013 ********* 2026-03-24 05:45:00.998239 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998245 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:00.998251 | orchestrator | 2026-03-24 05:45:00.998257 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:45:00.998264 | orchestrator | Tuesday 24 March 2026 05:44:39 +0000 (0:00:01.238) 0:55:20.252 ********* 2026-03-24 05:45:00.998270 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-24 05:45:00.998277 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-24 05:45:00.998283 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-24 05:45:00.998289 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-24 05:45:00.998296 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-24 05:45:00.998303 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-24 05:45:00.998310 | orchestrator | 2026-03-24 05:45:00.998317 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:45:00.998323 | orchestrator | Tuesday 24 March 2026 05:44:41 +0000 (0:00:02.174) 0:55:22.427 ********* 2026-03-24 05:45:00.998330 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-24 05:45:00.998337 | orchestrator 
| skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-24 05:45:00.998343 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-24 05:45:00.998350 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998356 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-24 05:45:00.998362 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-24 05:45:00.998369 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-24 05:45:00.998376 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:00.998383 | orchestrator | 2026-03-24 05:45:00.998390 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:45:00.998396 | orchestrator | Tuesday 24 March 2026 05:44:42 +0000 (0:00:01.242) 0:55:23.669 ********* 2026-03-24 05:45:00.998418 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-5 2026-03-24 05:45:00.998444 | orchestrator | 2026-03-24 05:45:00.998452 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:45:00.998460 | orchestrator | Tuesday 24 March 2026 05:44:43 +0000 (0:00:01.230) 0:55:24.900 ********* 2026-03-24 05:45:00.998466 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998473 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:00.998480 | orchestrator | 2026-03-24 05:45:00.998486 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:45:00.998493 | orchestrator | Tuesday 24 March 2026 05:44:45 +0000 (0:00:01.193) 0:55:26.093 ********* 2026-03-24 05:45:00.998499 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998506 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:00.998512 | orchestrator | 2026-03-24 05:45:00.998518 
| orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:45:00.998525 | orchestrator | Tuesday 24 March 2026 05:44:46 +0000 (0:00:01.234) 0:55:27.328 ********* 2026-03-24 05:45:00.998531 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998537 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:00.998543 | orchestrator | 2026-03-24 05:45:00.998550 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:45:00.998556 | orchestrator | Tuesday 24 March 2026 05:44:47 +0000 (0:00:01.199) 0:55:28.528 ********* 2026-03-24 05:45:00.998569 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:45:00.998575 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:45:00.998582 | orchestrator | 2026-03-24 05:45:00.998588 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:45:00.998595 | orchestrator | Tuesday 24 March 2026 05:44:48 +0000 (0:00:01.363) 0:55:29.891 ********* 2026-03-24 05:45:00.998601 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:45:00.998608 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:45:00.998615 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:45:00.998621 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998627 | orchestrator | 2026-03-24 05:45:00.998634 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:45:00.998645 | orchestrator | Tuesday 24 March 2026 05:44:50 +0000 (0:00:01.730) 0:55:31.622 ********* 2026-03-24 05:45:00.998651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:45:00.998658 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:45:00.998664 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  
2026-03-24 05:45:00.998671 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998677 | orchestrator | 2026-03-24 05:45:00.998684 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:45:00.998690 | orchestrator | Tuesday 24 March 2026 05:44:52 +0000 (0:00:01.397) 0:55:33.020 ********* 2026-03-24 05:45:00.998697 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:45:00.998703 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:45:00.998709 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:45:00.998715 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:00.998722 | orchestrator | 2026-03-24 05:45:00.998728 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:45:00.998735 | orchestrator | Tuesday 24 March 2026 05:44:53 +0000 (0:00:01.389) 0:55:34.409 ********* 2026-03-24 05:45:00.998741 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:45:00.998748 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:45:00.998754 | orchestrator | 2026-03-24 05:45:00.998761 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:45:00.998767 | orchestrator | Tuesday 24 March 2026 05:44:54 +0000 (0:00:01.251) 0:55:35.661 ********* 2026-03-24 05:45:00.998774 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-24 05:45:00.998781 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-24 05:45:00.998787 | orchestrator | 2026-03-24 05:45:00.998793 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 05:45:00.998800 | orchestrator | Tuesday 24 March 2026 05:44:56 +0000 (0:00:01.617) 0:55:37.278 ********* 2026-03-24 05:45:00.998806 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 
05:45:00.998812 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:45:00.998819 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:45:00.998825 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:45:00.998831 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-24 05:45:00.998837 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:45:00.998844 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:45:00.998850 | orchestrator | 2026-03-24 05:45:00.998856 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 05:45:00.998862 | orchestrator | Tuesday 24 March 2026 05:44:58 +0000 (0:00:02.087) 0:55:39.366 ********* 2026-03-24 05:45:00.998869 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:45:00.998880 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:45:00.998887 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:45:00.998892 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:45:00.998902 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-03-24 05:45:43.722092 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:45:43.722209 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:45:43.722225 | orchestrator | 2026-03-24 05:45:43.722238 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-03-24 
05:45:43.722251 | orchestrator | Tuesday 24 March 2026 05:45:00 +0000 (0:00:02.520) 0:55:41.886 ********* 2026-03-24 05:45:43.722263 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:43.722275 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:43.722286 | orchestrator | 2026-03-24 05:45:43.722298 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 05:45:43.722309 | orchestrator | Tuesday 24 March 2026 05:45:02 +0000 (0:00:01.264) 0:55:43.150 ********* 2026-03-24 05:45:43.722319 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-5 2026-03-24 05:45:43.722331 | orchestrator | 2026-03-24 05:45:43.722342 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-24 05:45:43.722353 | orchestrator | Tuesday 24 March 2026 05:45:03 +0000 (0:00:01.505) 0:55:44.656 ********* 2026-03-24 05:45:43.722364 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-5 2026-03-24 05:45:43.722375 | orchestrator | 2026-03-24 05:45:43.722386 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-24 05:45:43.722396 | orchestrator | Tuesday 24 March 2026 05:45:04 +0000 (0:00:01.181) 0:55:45.838 ********* 2026-03-24 05:45:43.722407 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:45:43.722418 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:45:43.722429 | orchestrator | 2026-03-24 05:45:43.722440 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-24 05:45:43.722490 | orchestrator | Tuesday 24 March 2026 05:45:06 +0000 (0:00:01.215) 0:55:47.053 ********* 2026-03-24 05:45:43.722502 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:45:43.722513 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:45:43.722524 | 
orchestrator | 2026-03-24 05:45:43.722535 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 05:45:43.722546 | orchestrator | Tuesday 24 March 2026 05:45:07 +0000 (0:00:01.631) 0:55:48.685 *********
2026-03-24 05:45:43.722573 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:45:43.722586 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:45:43.722599 | orchestrator |
2026-03-24 05:45:43.722611 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 05:45:43.722623 | orchestrator | Tuesday 24 March 2026 05:45:09 +0000 (0:00:01.605) 0:55:50.291 *********
2026-03-24 05:45:43.722636 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:45:43.722649 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:45:43.722661 | orchestrator |
2026-03-24 05:45:43.722673 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 05:45:43.722685 | orchestrator | Tuesday 24 March 2026 05:45:11 +0000 (0:00:01.621) 0:55:51.912 *********
2026-03-24 05:45:43.722698 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.722711 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.722722 | orchestrator |
2026-03-24 05:45:43.722735 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 05:45:43.722747 | orchestrator | Tuesday 24 March 2026 05:45:12 +0000 (0:00:01.234) 0:55:53.146 *********
2026-03-24 05:45:43.722760 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.722795 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.722808 | orchestrator |
2026-03-24 05:45:43.722820 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 05:45:43.722833 | orchestrator | Tuesday 24 March 2026 05:45:13 +0000 (0:00:01.214) 0:55:54.360 *********
2026-03-24 05:45:43.722846 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.722858 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.722871 | orchestrator |
2026-03-24 05:45:43.722883 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 05:45:43.722894 | orchestrator | Tuesday 24 March 2026 05:45:14 +0000 (0:00:01.217) 0:55:55.578 *********
2026-03-24 05:45:43.722905 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:45:43.722916 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:45:43.722927 | orchestrator |
2026-03-24 05:45:43.722938 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 05:45:43.722948 | orchestrator | Tuesday 24 March 2026 05:45:16 +0000 (0:00:01.642) 0:55:57.220 *********
2026-03-24 05:45:43.722959 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:45:43.722970 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:45:43.722981 | orchestrator |
2026-03-24 05:45:43.722992 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 05:45:43.723003 | orchestrator | Tuesday 24 March 2026 05:45:17 +0000 (0:00:01.632) 0:55:58.853 *********
2026-03-24 05:45:43.723013 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723024 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723035 | orchestrator |
2026-03-24 05:45:43.723046 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:45:43.723057 | orchestrator | Tuesday 24 March 2026 05:45:19 +0000 (0:00:01.231) 0:56:00.085 *********
2026-03-24 05:45:43.723067 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723078 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723089 | orchestrator |
2026-03-24 05:45:43.723100 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:45:43.723111 | orchestrator | Tuesday 24 March 2026 05:45:20 +0000 (0:00:01.220) 0:56:01.306 *********
2026-03-24 05:45:43.723122 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:45:43.723132 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:45:43.723143 | orchestrator |
2026-03-24 05:45:43.723154 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:45:43.723165 | orchestrator | Tuesday 24 March 2026 05:45:21 +0000 (0:00:01.201) 0:56:02.507 *********
2026-03-24 05:45:43.723176 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:45:43.723186 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:45:43.723197 | orchestrator |
2026-03-24 05:45:43.723208 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:45:43.723238 | orchestrator | Tuesday 24 March 2026 05:45:22 +0000 (0:00:01.198) 0:56:03.706 *********
2026-03-24 05:45:43.723249 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:45:43.723260 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:45:43.723271 | orchestrator |
2026-03-24 05:45:43.723282 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:45:43.723293 | orchestrator | Tuesday 24 March 2026 05:45:24 +0000 (0:00:01.214) 0:56:04.921 *********
2026-03-24 05:45:43.723304 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723315 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723326 | orchestrator |
2026-03-24 05:45:43.723337 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:45:43.723348 | orchestrator | Tuesday 24 March 2026 05:45:25 +0000 (0:00:01.216) 0:56:06.137 *********
2026-03-24 05:45:43.723358 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723372 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723389 | orchestrator |
2026-03-24 05:45:43.723407 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:45:43.723424 | orchestrator | Tuesday 24 March 2026 05:45:26 +0000 (0:00:01.277) 0:56:07.415 *********
2026-03-24 05:45:43.723481 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723495 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723506 | orchestrator |
2026-03-24 05:45:43.723517 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:45:43.723528 | orchestrator | Tuesday 24 March 2026 05:45:28 +0000 (0:00:01.503) 0:56:08.919 *********
2026-03-24 05:45:43.723538 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:45:43.723549 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:45:43.723560 | orchestrator |
2026-03-24 05:45:43.723571 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:45:43.723582 | orchestrator | Tuesday 24 March 2026 05:45:29 +0000 (0:00:01.286) 0:56:10.206 *********
2026-03-24 05:45:43.723593 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:45:43.723616 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:45:43.723627 | orchestrator |
2026-03-24 05:45:43.723638 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:45:43.723649 | orchestrator | Tuesday 24 March 2026 05:45:30 +0000 (0:00:01.284) 0:56:11.490 *********
2026-03-24 05:45:43.723659 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723670 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723681 | orchestrator |
2026-03-24 05:45:43.723698 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:45:43.723710 | orchestrator | Tuesday 24 March 2026 05:45:31 +0000 (0:00:01.221) 0:56:12.712 *********
2026-03-24 05:45:43.723720 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723731 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723742 | orchestrator |
2026-03-24 05:45:43.723753 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 05:45:43.723764 | orchestrator | Tuesday 24 March 2026 05:45:33 +0000 (0:00:01.219) 0:56:13.931 *********
2026-03-24 05:45:43.723774 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723785 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723796 | orchestrator |
2026-03-24 05:45:43.723807 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 05:45:43.723818 | orchestrator | Tuesday 24 March 2026 05:45:34 +0000 (0:00:01.243) 0:56:15.174 *********
2026-03-24 05:45:43.723829 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723839 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723850 | orchestrator |
2026-03-24 05:45:43.723861 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 05:45:43.723872 | orchestrator | Tuesday 24 March 2026 05:45:35 +0000 (0:00:01.250) 0:56:16.424 *********
2026-03-24 05:45:43.723883 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723894 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723904 | orchestrator |
2026-03-24 05:45:43.723915 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 05:45:43.723926 | orchestrator | Tuesday 24 March 2026 05:45:36 +0000 (0:00:01.186) 0:56:17.611 *********
2026-03-24 05:45:43.723937 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.723948 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.723959 | orchestrator |
2026-03-24 05:45:43.723970 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 05:45:43.723980 | orchestrator | Tuesday 24 March 2026 05:45:37 +0000 (0:00:01.240) 0:56:18.851 *********
2026-03-24 05:45:43.723991 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.724002 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.724013 | orchestrator |
2026-03-24 05:45:43.724024 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 05:45:43.724035 | orchestrator | Tuesday 24 March 2026 05:45:39 +0000 (0:00:01.201) 0:56:20.053 *********
2026-03-24 05:45:43.724046 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.724057 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.724067 | orchestrator |
2026-03-24 05:45:43.724078 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 05:45:43.724096 | orchestrator | Tuesday 24 March 2026 05:45:40 +0000 (0:00:01.160) 0:56:21.214 *********
2026-03-24 05:45:43.724107 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.724118 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.724129 | orchestrator |
2026-03-24 05:45:43.724140 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 05:45:43.724150 | orchestrator | Tuesday 24 March 2026 05:45:41 +0000 (0:00:01.163) 0:56:22.377 *********
2026-03-24 05:45:43.724161 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.724172 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.724183 | orchestrator |
2026-03-24 05:45:43.724194 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 05:45:43.724205 | orchestrator | Tuesday 24 March 2026 05:45:42 +0000 (0:00:01.170) 0:56:23.548 *********
2026-03-24 05:45:43.724215 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:45:43.724226 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:45:43.724237 | orchestrator |
2026-03-24 05:45:43.724248 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 05:45:43.724267 | orchestrator | Tuesday 24 March 2026 05:45:43 +0000 (0:00:01.063) 0:56:24.611 *********
2026-03-24 05:46:29.151028 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.151172 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.151198 | orchestrator |
2026-03-24 05:46:29.151218 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 05:46:29.151237 | orchestrator | Tuesday 24 March 2026 05:45:44 +0000 (0:00:01.155) 0:56:25.767 *********
2026-03-24 05:46:29.151252 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:46:29.151270 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:46:29.151286 | orchestrator |
2026-03-24 05:46:29.151304 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 05:46:29.151320 | orchestrator | Tuesday 24 March 2026 05:45:46 +0000 (0:00:02.027) 0:56:27.795 *********
2026-03-24 05:46:29.151337 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:46:29.151353 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:46:29.151369 | orchestrator |
2026-03-24 05:46:29.151383 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 05:46:29.151397 | orchestrator | Tuesday 24 March 2026 05:45:49 +0000 (0:00:02.347) 0:56:30.142 *********
2026-03-24 05:46:29.151415 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-5
2026-03-24 05:46:29.151432 | orchestrator |
2026-03-24 05:46:29.151449 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 05:46:29.151465 | orchestrator | Tuesday 24 March 2026 05:45:50 +0000 (0:00:01.250) 0:56:31.393 *********
2026-03-24 05:46:29.151513 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.151530 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.151546 | orchestrator |
2026-03-24 05:46:29.151565 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 05:46:29.151580 | orchestrator | Tuesday 24 March 2026 05:45:51 +0000 (0:00:01.169) 0:56:32.563 *********
2026-03-24 05:46:29.151594 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.151608 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.151623 | orchestrator |
2026-03-24 05:46:29.151639 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 05:46:29.151656 | orchestrator | Tuesday 24 March 2026 05:45:52 +0000 (0:00:01.129) 0:56:33.692 *********
2026-03-24 05:46:29.151673 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 05:46:29.151710 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 05:46:29.151728 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 05:46:29.151745 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 05:46:29.151761 | orchestrator |
2026-03-24 05:46:29.151809 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 05:46:29.151821 | orchestrator | Tuesday 24 March 2026 05:45:54 +0000 (0:00:01.947) 0:56:35.640 *********
2026-03-24 05:46:29.151831 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:46:29.151840 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:46:29.151850 | orchestrator |
2026-03-24 05:46:29.151866 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-24 05:46:29.151881 | orchestrator | Tuesday 24 March 2026 05:45:56 +0000 (0:00:01.489) 0:56:37.129 *********
2026-03-24 05:46:29.151897 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.151913 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.151929 | orchestrator |
2026-03-24 05:46:29.151946 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-24 05:46:29.151963 | orchestrator | Tuesday 24 March 2026 05:45:57 +0000 (0:00:01.170) 0:56:38.300 *********
2026-03-24 05:46:29.151982 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.151998 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.152014 | orchestrator |
2026-03-24 05:46:29.152032 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 05:46:29.152051 | orchestrator | Tuesday 24 March 2026 05:45:58 +0000 (0:00:01.201) 0:56:39.502 *********
2026-03-24 05:46:29.152068 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.152083 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.152099 | orchestrator |
2026-03-24 05:46:29.152116 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 05:46:29.152133 | orchestrator | Tuesday 24 March 2026 05:45:59 +0000 (0:00:01.210) 0:56:40.712 *********
2026-03-24 05:46:29.152150 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-5
2026-03-24 05:46:29.152165 | orchestrator |
2026-03-24 05:46:29.152181 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-24 05:46:29.152197 | orchestrator | Tuesday 24 March 2026 05:46:01 +0000 (0:00:01.227) 0:56:41.940 *********
2026-03-24 05:46:29.152213 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:46:29.152230 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:46:29.152249 | orchestrator |
2026-03-24 05:46:29.152265 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-24 05:46:29.152282 | orchestrator | Tuesday 24 March 2026 05:46:03 +0000 (0:00:02.815) 0:56:44.756 *********
2026-03-24 05:46:29.152295 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-24 05:46:29.152307 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-24 05:46:29.152318 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-24 05:46:29.152329 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.152340 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-24 05:46:29.152351 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-24 05:46:29.152362 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-24 05:46:29.152373 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.152385 | orchestrator |
2026-03-24 05:46:29.152396 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-24 05:46:29.152431 | orchestrator | Tuesday 24 March 2026 05:46:05 +0000 (0:00:01.250) 0:56:46.006 *********
2026-03-24 05:46:29.152443 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.152454 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.152465 | orchestrator |
2026-03-24 05:46:29.152504 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-24 05:46:29.152516 | orchestrator | Tuesday 24 March 2026 05:46:06 +0000 (0:00:01.203) 0:56:47.210 *********
2026-03-24 05:46:29.152526 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.152537 | orchestrator |
2026-03-24 05:46:29.152548 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-24 05:46:29.152572 | orchestrator | Tuesday 24 March 2026 05:46:07 +0000 (0:00:01.134) 0:56:48.344 *********
2026-03-24 05:46:29.152585 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.152596 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.152606 | orchestrator |
2026-03-24 05:46:29.152616 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-24 05:46:29.152625 | orchestrator | Tuesday 24 March 2026 05:46:08 +0000 (0:00:01.222) 0:56:49.567 *********
2026-03-24 05:46:29.152635 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.152644 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.152654 | orchestrator |
2026-03-24 05:46:29.152663 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-24 05:46:29.152673 | orchestrator | Tuesday 24 March 2026 05:46:09 +0000 (0:00:01.233) 0:56:50.800 *********
2026-03-24 05:46:29.152682 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.152692 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.152701 | orchestrator |
2026-03-24 05:46:29.152711 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 05:46:29.152720 | orchestrator | Tuesday 24 March 2026 05:46:11 +0000 (0:00:01.244) 0:56:52.045 *********
2026-03-24 05:46:29.152730 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:46:29.152741 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:46:29.152757 | orchestrator |
2026-03-24 05:46:29.152773 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 05:46:29.152789 | orchestrator | Tuesday 24 March 2026 05:46:13 +0000 (0:00:02.591) 0:56:54.636 *********
2026-03-24 05:46:29.152806 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:46:29.152823 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:46:29.152839 | orchestrator |
2026-03-24 05:46:29.152854 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 05:46:29.152874 | orchestrator | Tuesday 24 March 2026 05:46:14 +0000 (0:00:01.243) 0:56:55.879 *********
2026-03-24 05:46:29.152887 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-5
2026-03-24 05:46:29.152904 | orchestrator |
2026-03-24 05:46:29.152920 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-24 05:46:29.152935 | orchestrator | Tuesday 24 March 2026 05:46:16 +0000 (0:00:01.495) 0:56:57.375 *********
2026-03-24 05:46:29.152951 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.152968 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.152985 | orchestrator |
2026-03-24 05:46:29.153001 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-24 05:46:29.153017 | orchestrator | Tuesday 24 March 2026 05:46:17 +0000 (0:00:01.212) 0:56:58.588 *********
2026-03-24 05:46:29.153033 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.153044 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.153053 | orchestrator |
2026-03-24 05:46:29.153063 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-24 05:46:29.153073 | orchestrator | Tuesday 24 March 2026 05:46:18 +0000 (0:00:01.264) 0:56:59.853 *********
2026-03-24 05:46:29.153082 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.153092 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.153101 | orchestrator |
2026-03-24 05:46:29.153110 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-24 05:46:29.153120 | orchestrator | Tuesday 24 March 2026 05:46:20 +0000 (0:00:01.206) 0:57:01.059 *********
2026-03-24 05:46:29.153130 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.153139 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.153149 | orchestrator |
2026-03-24 05:46:29.153158 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-24 05:46:29.153167 | orchestrator | Tuesday 24 March 2026 05:46:21 +0000 (0:00:01.285) 0:57:02.344 *********
2026-03-24 05:46:29.153177 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.153196 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.153205 | orchestrator |
2026-03-24 05:46:29.153215 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-24 05:46:29.153225 | orchestrator | Tuesday 24 March 2026 05:46:22 +0000 (0:00:01.219) 0:57:03.564 *********
2026-03-24 05:46:29.153234 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.153244 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.153253 | orchestrator |
2026-03-24 05:46:29.153263 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-24 05:46:29.153273 | orchestrator | Tuesday 24 March 2026 05:46:23 +0000 (0:00:01.254) 0:57:04.818 *********
2026-03-24 05:46:29.153282 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.153291 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.153301 | orchestrator |
2026-03-24 05:46:29.153310 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-24 05:46:29.153320 | orchestrator | Tuesday 24 March 2026 05:46:25 +0000 (0:00:01.520) 0:57:06.339 *********
2026-03-24 05:46:29.153329 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:46:29.153339 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:46:29.153348 | orchestrator |
2026-03-24 05:46:29.153358 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-24 05:46:29.153368 | orchestrator | Tuesday 24 March 2026 05:46:26 +0000 (0:00:01.245) 0:57:07.584 *********
2026-03-24 05:46:29.153377 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:46:29.153387 | orchestrator | ok: [testbed-node-5]
2026-03-24 05:46:29.153397 | orchestrator |
2026-03-24 05:46:29.153406 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 05:46:29.153421 | orchestrator | Tuesday 24 March 2026 05:46:27 +0000 (0:00:01.239) 0:57:08.823 *********
2026-03-24 05:46:29.153451 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-5
2026-03-24 05:47:05.886224 | orchestrator |
2026-03-24 05:47:05.886366 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-24 05:47:05.886385 | orchestrator | Tuesday 24 March 2026 05:46:29 +0000 (0:00:01.218) 0:57:10.042 *********
2026-03-24 05:47:05.886398 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-03-24 05:47:05.886410 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-24 05:47:05.886422 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-24 05:47:05.886433 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-24 05:47:05.886444 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-24 05:47:05.886455 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-24 05:47:05.886466 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-24 05:47:05.886477 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-24 05:47:05.886509 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-24 05:47:05.886521 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-24 05:47:05.886532 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-24 05:47:05.886543 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-24 05:47:05.886553 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-24 05:47:05.886564 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-24 05:47:05.886576 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-24 05:47:05.886587 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-24 05:47:05.886598 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-24 05:47:05.886609 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-24 05:47:05.886620 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-24 05:47:05.886631 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-24 05:47:05.886642 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-24 05:47:05.886696 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-24 05:47:05.886708 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-24 05:47:05.886719 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-24 05:47:05.886730 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-24 05:47:05.886741 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-24 05:47:05.886752 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-24 05:47:05.886763 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-24 05:47:05.886774 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-24 05:47:05.886786 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-03-24 05:47:05.886806 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-24 05:47:05.886823 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-03-24 05:47:05.886840 | orchestrator |
2026-03-24 05:47:05.886858 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-24 05:47:05.886876 | orchestrator | Tuesday 24 March 2026 05:46:35 +0000 (0:00:06.807) 0:57:16.849 *********
2026-03-24 05:47:05.886894 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-5
2026-03-24 05:47:05.886912 | orchestrator |
2026-03-24 05:47:05.886930 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-24 05:47:05.886947 | orchestrator | Tuesday 24 March 2026 05:46:37 +0000 (0:00:01.254) 0:57:18.104 *********
2026-03-24 05:47:05.886966 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:47:05.886986 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-24 05:47:05.887004 | orchestrator |
2026-03-24 05:47:05.887022 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-24 05:47:05.887040 | orchestrator | Tuesday 24 March 2026 05:46:38 +0000 (0:00:01.643) 0:57:19.747 *********
2026-03-24 05:47:05.887058 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:47:05.887076 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-24 05:47:05.887094 | orchestrator |
2026-03-24 05:47:05.887112 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-24 05:47:05.887131 | orchestrator | Tuesday 24 March 2026 05:46:40 +0000 (0:00:02.064) 0:57:21.812 *********
2026-03-24 05:47:05.887149 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887168 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887187 | orchestrator |
2026-03-24 05:47:05.887207 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-24 05:47:05.887225 | orchestrator | Tuesday 24 March 2026 05:46:42 +0000 (0:00:01.213) 0:57:23.025 *********
2026-03-24 05:47:05.887243 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887261 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887280 | orchestrator |
2026-03-24 05:47:05.887299 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-24 05:47:05.887318 | orchestrator | Tuesday 24 March 2026 05:46:43 +0000 (0:00:01.223) 0:57:24.249 *********
2026-03-24 05:47:05.887335 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887347 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887358 | orchestrator |
2026-03-24 05:47:05.887389 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-24 05:47:05.887401 | orchestrator | Tuesday 24 March 2026 05:46:44 +0000 (0:00:01.500) 0:57:25.750 *********
2026-03-24 05:47:05.887412 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887436 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887447 | orchestrator |
2026-03-24 05:47:05.887457 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-24 05:47:05.887468 | orchestrator | Tuesday 24 March 2026 05:46:46 +0000 (0:00:01.248) 0:57:26.998 *********
2026-03-24 05:47:05.887479 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887533 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887545 | orchestrator |
2026-03-24 05:47:05.887556 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-24 05:47:05.887567 | orchestrator | Tuesday 24 March 2026 05:46:47 +0000 (0:00:01.210) 0:57:28.209 *********
2026-03-24 05:47:05.887578 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887589 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887600 | orchestrator |
2026-03-24 05:47:05.887611 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-24 05:47:05.887622 | orchestrator | Tuesday 24 March 2026 05:46:48 +0000 (0:00:01.214) 0:57:29.423 *********
2026-03-24 05:47:05.887633 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887644 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887654 | orchestrator |
2026-03-24 05:47:05.887665 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-24 05:47:05.887676 | orchestrator | Tuesday 24 March 2026 05:46:49 +0000 (0:00:01.243) 0:57:30.666 *********
2026-03-24 05:47:05.887687 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887698 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887709 | orchestrator |
2026-03-24 05:47:05.887720 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-24 05:47:05.887731 | orchestrator | Tuesday 24 March 2026 05:46:51 +0000 (0:00:01.244) 0:57:31.906 *********
2026-03-24 05:47:05.887742 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887753 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887764 | orchestrator |
2026-03-24 05:47:05.887783 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-24 05:47:05.887795 | orchestrator | Tuesday 24 March 2026 05:46:52 +0000 (0:00:01.244) 0:57:33.151 *********
2026-03-24 05:47:05.887806 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887817 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887828 | orchestrator |
2026-03-24 05:47:05.887838 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-24 05:47:05.887850 | orchestrator | Tuesday 24 March 2026 05:46:53 +0000 (0:00:01.210) 0:57:34.361 *********
2026-03-24 05:47:05.887861 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:05.887872 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:05.887882 | orchestrator |
2026-03-24 05:47:05.887893 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-24 05:47:05.887904 | orchestrator | Tuesday 24 March 2026 05:46:54 +0000 (0:00:01.224) 0:57:35.586 *********
2026-03-24 05:47:05.887917 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-03-24 05:47:05.887943 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-24 05:47:05.887964 | orchestrator |
2026-03-24 05:47:05.887990 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-24 05:47:05.888007 | orchestrator | Tuesday 24 March 2026 05:46:59 +0000 (0:00:04.575) 0:57:40.161 *********
2026-03-24 05:47:05.888025 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:47:05.888043 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-24 05:47:05.888061 | orchestrator |
2026-03-24 05:47:05.888078 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-24 05:47:05.888096 | orchestrator | Tuesday 24 March 2026 05:47:00 +0000 (0:00:01.359) 0:57:41.521 *********
2026-03-24 05:47:05.888130 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-24 05:47:05.888152 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-24 05:47:05.888170 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-24 05:47:05.888205 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-24 05:47:56.112470 | orchestrator |
2026-03-24 05:47:56.112619 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-24 05:47:56.112633 | orchestrator | Tuesday 24 March 2026 05:47:05 +0000 (0:00:05.252) 0:57:46.773 *********
2026-03-24 05:47:56.112642 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:47:56.112651 | orchestrator | skipping: [testbed-node-5]
2026-03-24 05:47:56.112659 | orchestrator |
2026-03-24 05:47:56.112668 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-24 05:47:56.112676 | orchestrator | Tuesday 24 March 2026 05:47:07 +0000
(0:00:01.217) 0:57:47.991 ********* 2026-03-24 05:47:56.112684 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.112692 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:47:56.112700 | orchestrator | 2026-03-24 05:47:56.112709 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:47:56.112718 | orchestrator | Tuesday 24 March 2026 05:47:08 +0000 (0:00:01.529) 0:57:49.520 ********* 2026-03-24 05:47:56.112726 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.112734 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:47:56.112742 | orchestrator | 2026-03-24 05:47:56.112750 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:47:56.112758 | orchestrator | Tuesday 24 March 2026 05:47:09 +0000 (0:00:01.234) 0:57:50.754 ********* 2026-03-24 05:47:56.112766 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.112774 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:47:56.112782 | orchestrator | 2026-03-24 05:47:56.112789 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:47:56.112797 | orchestrator | Tuesday 24 March 2026 05:47:11 +0000 (0:00:01.261) 0:57:52.016 ********* 2026-03-24 05:47:56.112805 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.112814 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:47:56.112822 | orchestrator | 2026-03-24 05:47:56.112830 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:47:56.112852 | orchestrator | Tuesday 24 March 2026 05:47:12 +0000 (0:00:01.224) 0:57:53.240 ********* 2026-03-24 05:47:56.112861 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:47:56.112869 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:47:56.112877 | orchestrator | 2026-03-24 
05:47:56.112885 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:47:56.112893 | orchestrator | Tuesday 24 March 2026 05:47:13 +0000 (0:00:01.346) 0:57:54.587 ********* 2026-03-24 05:47:56.112922 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:47:56.112931 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:47:56.112938 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:47:56.112946 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.112954 | orchestrator | 2026-03-24 05:47:56.112962 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:47:56.112970 | orchestrator | Tuesday 24 March 2026 05:47:15 +0000 (0:00:01.421) 0:57:56.009 ********* 2026-03-24 05:47:56.112977 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:47:56.112985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:47:56.112993 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:47:56.113001 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.113008 | orchestrator | 2026-03-24 05:47:56.113016 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:47:56.113025 | orchestrator | Tuesday 24 March 2026 05:47:16 +0000 (0:00:01.380) 0:57:57.389 ********* 2026-03-24 05:47:56.113034 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:47:56.113043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:47:56.113052 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:47:56.113071 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.113080 | orchestrator | 2026-03-24 05:47:56.113089 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-03-24 05:47:56.113098 | orchestrator | Tuesday 24 March 2026 05:47:18 +0000 (0:00:01.808) 0:57:59.198 ********* 2026-03-24 05:47:56.113107 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:47:56.113116 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:47:56.113125 | orchestrator | 2026-03-24 05:47:56.113135 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:47:56.113144 | orchestrator | Tuesday 24 March 2026 05:47:19 +0000 (0:00:01.339) 0:58:00.538 ********* 2026-03-24 05:47:56.113152 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-24 05:47:56.113160 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-24 05:47:56.113172 | orchestrator | 2026-03-24 05:47:56.113185 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:47:56.113198 | orchestrator | Tuesday 24 March 2026 05:47:21 +0000 (0:00:01.472) 0:58:02.010 ********* 2026-03-24 05:47:56.113211 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:47:56.113234 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:47:56.113248 | orchestrator | 2026-03-24 05:47:56.113261 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-24 05:47:56.113275 | orchestrator | Tuesday 24 March 2026 05:47:22 +0000 (0:00:01.866) 0:58:03.877 ********* 2026-03-24 05:47:56.113288 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.113301 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:47:56.113311 | orchestrator | 2026-03-24 05:47:56.113318 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-24 05:47:56.113326 | orchestrator | Tuesday 24 March 2026 05:47:24 +0000 (0:00:01.235) 0:58:05.113 ********* 2026-03-24 05:47:56.113334 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, 
testbed-node-5 2026-03-24 05:47:56.113343 | orchestrator | 2026-03-24 05:47:56.113351 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-24 05:47:56.113359 | orchestrator | Tuesday 24 March 2026 05:47:25 +0000 (0:00:01.407) 0:58:06.520 ********* 2026-03-24 05:47:56.113367 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-24 05:47:56.113390 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-24 05:47:56.113399 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-24 05:47:56.113407 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-24 05:47:56.113423 | orchestrator | 2026-03-24 05:47:56.113432 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-24 05:47:56.113439 | orchestrator | Tuesday 24 March 2026 05:47:27 +0000 (0:00:01.874) 0:58:08.394 ********* 2026-03-24 05:47:56.113447 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:47:56.113455 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-24 05:47:56.113463 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 05:47:56.113471 | orchestrator | 2026-03-24 05:47:56.113478 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:47:56.113486 | orchestrator | Tuesday 24 March 2026 05:47:30 +0000 (0:00:03.242) 0:58:11.637 ********* 2026-03-24 05:47:56.113494 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-24 05:47:56.113502 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-24 05:47:56.113543 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:47:56.113551 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-24 05:47:56.113559 | orchestrator | skipping: [testbed-node-5] => 
(item=None)  2026-03-24 05:47:56.113570 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:47:56.113584 | orchestrator | 2026-03-24 05:47:56.113598 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-24 05:47:56.113612 | orchestrator | Tuesday 24 March 2026 05:47:32 +0000 (0:00:02.016) 0:58:13.654 ********* 2026-03-24 05:47:56.113626 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:47:56.113640 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:47:56.113653 | orchestrator | 2026-03-24 05:47:56.113665 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-24 05:47:56.113687 | orchestrator | Tuesday 24 March 2026 05:47:34 +0000 (0:00:01.591) 0:58:15.245 ********* 2026-03-24 05:47:56.113702 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.113718 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:47:56.113734 | orchestrator | 2026-03-24 05:47:56.113749 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-24 05:47:56.113765 | orchestrator | Tuesday 24 March 2026 05:47:35 +0000 (0:00:01.227) 0:58:16.473 ********* 2026-03-24 05:47:56.113782 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-5 2026-03-24 05:47:56.113798 | orchestrator | 2026-03-24 05:47:56.113814 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-24 05:47:56.113830 | orchestrator | Tuesday 24 March 2026 05:47:36 +0000 (0:00:01.394) 0:58:17.867 ********* 2026-03-24 05:47:56.113846 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-5 2026-03-24 05:47:56.113862 | orchestrator | 2026-03-24 05:47:56.113877 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-24 05:47:56.113894 | orchestrator | Tuesday 24 March 2026 
05:47:38 +0000 (0:00:01.213) 0:58:19.081 ********* 2026-03-24 05:47:56.113910 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:47:56.113925 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:47:56.113941 | orchestrator | 2026-03-24 05:47:56.113957 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-24 05:47:56.113974 | orchestrator | Tuesday 24 March 2026 05:47:40 +0000 (0:00:02.161) 0:58:21.243 ********* 2026-03-24 05:47:56.113990 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:47:56.114005 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:47:56.114090 | orchestrator | 2026-03-24 05:47:56.114108 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-24 05:47:56.114124 | orchestrator | Tuesday 24 March 2026 05:47:42 +0000 (0:00:01.999) 0:58:23.242 ********* 2026-03-24 05:47:56.114140 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:47:56.114156 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:47:56.114171 | orchestrator | 2026-03-24 05:47:56.114186 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-24 05:47:56.114203 | orchestrator | Tuesday 24 March 2026 05:47:44 +0000 (0:00:02.422) 0:58:25.665 ********* 2026-03-24 05:47:56.114229 | orchestrator | changed: [testbed-node-4] 2026-03-24 05:47:56.114246 | orchestrator | changed: [testbed-node-5] 2026-03-24 05:47:56.114262 | orchestrator | 2026-03-24 05:47:56.114278 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-24 05:47:56.114294 | orchestrator | Tuesday 24 March 2026 05:47:48 +0000 (0:00:03.569) 0:58:29.234 ********* 2026-03-24 05:47:56.114311 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:47:56.114326 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:47:56.114343 | orchestrator | 2026-03-24 05:47:56.114359 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-03-24 05:47:56.114374 | orchestrator | Tuesday 24 March 2026 05:47:50 +0000 (0:00:01.735) 0:58:30.970 ********* 2026-03-24 05:47:56.114391 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:47:56.114406 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:47:56.114422 | orchestrator | 2026-03-24 05:47:56.114437 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-24 05:47:56.114453 | orchestrator | 2026-03-24 05:47:56.114469 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:47:56.114484 | orchestrator | Tuesday 24 March 2026 05:47:53 +0000 (0:00:03.396) 0:58:34.366 ********* 2026-03-24 05:47:56.114501 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-24 05:47:56.114570 | orchestrator | 2026-03-24 05:47:56.114585 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:47:56.114600 | orchestrator | Tuesday 24 March 2026 05:47:54 +0000 (0:00:01.087) 0:58:35.454 ********* 2026-03-24 05:47:56.114614 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:47:56.114629 | orchestrator | 2026-03-24 05:47:56.114644 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:47:56.114673 | orchestrator | Tuesday 24 March 2026 05:47:56 +0000 (0:00:01.542) 0:58:36.996 ********* 2026-03-24 05:48:19.713449 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:19.713615 | orchestrator | 2026-03-24 05:48:19.713641 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:48:19.713663 | orchestrator | Tuesday 24 March 2026 05:47:57 +0000 (0:00:01.106) 0:58:38.103 ********* 2026-03-24 05:48:19.713683 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:19.713701 | 
orchestrator | 2026-03-24 05:48:19.713720 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:48:19.713738 | orchestrator | Tuesday 24 March 2026 05:47:58 +0000 (0:00:01.514) 0:58:39.617 ********* 2026-03-24 05:48:19.713757 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:19.713775 | orchestrator | 2026-03-24 05:48:19.713792 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:48:19.713810 | orchestrator | Tuesday 24 March 2026 05:47:59 +0000 (0:00:01.121) 0:58:40.739 ********* 2026-03-24 05:48:19.713828 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:19.713846 | orchestrator | 2026-03-24 05:48:19.713863 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:48:19.713881 | orchestrator | Tuesday 24 March 2026 05:48:00 +0000 (0:00:01.130) 0:58:41.869 ********* 2026-03-24 05:48:19.713900 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:19.713918 | orchestrator | 2026-03-24 05:48:19.713936 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:48:19.713954 | orchestrator | Tuesday 24 March 2026 05:48:02 +0000 (0:00:01.116) 0:58:42.986 ********* 2026-03-24 05:48:19.713973 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:19.713993 | orchestrator | 2026-03-24 05:48:19.714014 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:48:19.714131 | orchestrator | Tuesday 24 March 2026 05:48:03 +0000 (0:00:01.119) 0:58:44.106 ********* 2026-03-24 05:48:19.714152 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:19.714173 | orchestrator | 2026-03-24 05:48:19.714193 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:48:19.714264 | orchestrator | Tuesday 24 March 2026 05:48:04 +0000 (0:00:01.111) 
0:58:45.218 ********* 2026-03-24 05:48:19.714286 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:48:19.714304 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:48:19.714324 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:48:19.714344 | orchestrator | 2026-03-24 05:48:19.714362 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-24 05:48:19.714382 | orchestrator | Tuesday 24 March 2026 05:48:05 +0000 (0:00:01.647) 0:58:46.865 ********* 2026-03-24 05:48:19.714399 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:19.714417 | orchestrator | 2026-03-24 05:48:19.714437 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:48:19.714454 | orchestrator | Tuesday 24 March 2026 05:48:07 +0000 (0:00:01.230) 0:58:48.096 ********* 2026-03-24 05:48:19.714471 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:48:19.714489 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:48:19.714507 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:48:19.714584 | orchestrator | 2026-03-24 05:48:19.714603 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:48:19.714619 | orchestrator | Tuesday 24 March 2026 05:48:10 +0000 (0:00:02.844) 0:58:50.940 ********* 2026-03-24 05:48:19.714638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-24 05:48:19.714656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-24 05:48:19.714673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-24 
05:48:19.714691 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:19.714710 | orchestrator | 2026-03-24 05:48:19.714728 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:48:19.714745 | orchestrator | Tuesday 24 March 2026 05:48:11 +0000 (0:00:01.411) 0:58:52.351 ********* 2026-03-24 05:48:19.714765 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:48:19.714788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:48:19.714806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:48:19.714822 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:19.714839 | orchestrator | 2026-03-24 05:48:19.714858 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:48:19.714876 | orchestrator | Tuesday 24 March 2026 05:48:13 +0000 (0:00:01.957) 0:58:54.309 ********* 2026-03-24 05:48:19.714926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 
05:48:19.714950 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:19.714989 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:19.715010 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:19.715027 | orchestrator | 2026-03-24 05:48:19.715044 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:48:19.715062 | orchestrator | Tuesday 24 March 2026 05:48:14 +0000 (0:00:01.194) 0:58:55.504 ********* 2026-03-24 05:48:19.715092 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:48:07.743879', 'end': '2026-03-24 05:48:07.784128', 'delta': '0:00:00.040249', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:48:19.715117 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:48:08.327459', 'end': '2026-03-24 05:48:08.368600', 'delta': '0:00:00.041141', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:48:19.715137 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:48:08.868475', 'end': '2026-03-24 05:48:08.917423', 'delta': '0:00:00.048948', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:48:19.715157 | orchestrator | 2026-03-24 05:48:19.715176 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:48:19.715193 | orchestrator | Tuesday 24 March 2026 05:48:15 +0000 (0:00:01.178) 0:58:56.683 ********* 2026-03-24 05:48:19.715211 | orchestrator | ok: [testbed-node-3] 2026-03-24 
05:48:19.715228 | orchestrator | 2026-03-24 05:48:19.715245 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:48:19.715262 | orchestrator | Tuesday 24 March 2026 05:48:17 +0000 (0:00:01.264) 0:58:57.947 ********* 2026-03-24 05:48:19.715278 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:19.715295 | orchestrator | 2026-03-24 05:48:19.715312 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-24 05:48:19.715346 | orchestrator | Tuesday 24 March 2026 05:48:18 +0000 (0:00:01.536) 0:58:59.484 ********* 2026-03-24 05:48:19.715367 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:19.715385 | orchestrator | 2026-03-24 05:48:19.715403 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:48:19.715438 | orchestrator | Tuesday 24 March 2026 05:48:19 +0000 (0:00:01.116) 0:59:00.601 ********* 2026-03-24 05:48:33.355373 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:48:33.355480 | orchestrator | 2026-03-24 05:48:33.355496 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:48:33.355508 | orchestrator | Tuesday 24 March 2026 05:48:21 +0000 (0:00:01.956) 0:59:02.557 ********* 2026-03-24 05:48:33.355519 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:33.355579 | orchestrator | 2026-03-24 05:48:33.355590 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:48:33.355600 | orchestrator | Tuesday 24 March 2026 05:48:22 +0000 (0:00:01.141) 0:59:03.699 ********* 2026-03-24 05:48:33.355610 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:33.355620 | orchestrator | 2026-03-24 05:48:33.355630 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:48:33.355640 | orchestrator 
| Tuesday 24 March 2026 05:48:23 +0000 (0:00:01.115) 0:59:04.814 ********* 2026-03-24 05:48:33.355650 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:33.355660 | orchestrator | 2026-03-24 05:48:33.355669 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:48:33.355679 | orchestrator | Tuesday 24 March 2026 05:48:25 +0000 (0:00:01.253) 0:59:06.068 ********* 2026-03-24 05:48:33.355689 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:33.355699 | orchestrator | 2026-03-24 05:48:33.355708 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:48:33.355718 | orchestrator | Tuesday 24 March 2026 05:48:26 +0000 (0:00:01.115) 0:59:07.184 ********* 2026-03-24 05:48:33.355728 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:33.355737 | orchestrator | 2026-03-24 05:48:33.355747 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:48:33.355757 | orchestrator | Tuesday 24 March 2026 05:48:27 +0000 (0:00:01.097) 0:59:08.281 ********* 2026-03-24 05:48:33.355766 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:33.355776 | orchestrator | 2026-03-24 05:48:33.355800 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:48:33.355811 | orchestrator | Tuesday 24 March 2026 05:48:28 +0000 (0:00:01.155) 0:59:09.436 ********* 2026-03-24 05:48:33.355820 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:33.355830 | orchestrator | 2026-03-24 05:48:33.355840 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:48:33.355850 | orchestrator | Tuesday 24 March 2026 05:48:29 +0000 (0:00:01.111) 0:59:10.548 ********* 2026-03-24 05:48:33.355859 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:33.355869 | orchestrator | 2026-03-24 05:48:33.355878 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:48:33.355889 | orchestrator | Tuesday 24 March 2026 05:48:30 +0000 (0:00:01.192) 0:59:11.741 ********* 2026-03-24 05:48:33.355899 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:33.355908 | orchestrator | 2026-03-24 05:48:33.355918 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:48:33.355928 | orchestrator | Tuesday 24 March 2026 05:48:31 +0000 (0:00:01.129) 0:59:12.870 ********* 2026-03-24 05:48:33.355938 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:48:33.355948 | orchestrator | 2026-03-24 05:48:33.355957 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:48:33.355967 | orchestrator | Tuesday 24 March 2026 05:48:33 +0000 (0:00:01.159) 0:59:14.030 ********* 2026-03-24 05:48:33.355979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:48:33.356015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'uuids': ['53f92492-3feb-4aff-ba7b-51c07dc9f447'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc']}})  2026-03-24 05:48:33.356029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f47182f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:48:33.356057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b']}})  2026-03-24 05:48:33.356069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:48:33.356085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:48:33.356096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:48:33.356107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:48:33.356126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR', 'dm-uuid-CRYPT-LUKS2-0e39c5b023134ee09db3234d14233a9c-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:48:33.356137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:48:33.356154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'uuids': ['0e39c5b0-2313-4ee0-9db3-234d14233a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR']}})  2026-03-24 05:48:34.694603 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80']}})  2026-03-24 05:48:34.694709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:48:34.694728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85facbe5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:48:34.694756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:48:34.694780 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:48:34.694789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc', 'dm-uuid-CRYPT-LUKS2-53f924923feb4affba7b51c07dc9f447-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:48:34.694798 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:48:34.694807 | orchestrator | 2026-03-24 05:48:34.694815 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:48:34.694823 | orchestrator | Tuesday 24 March 2026 05:48:34 +0000 (0:00:01.339) 0:59:15.369 ********* 2026-03-24 05:48:34.694836 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:34.694845 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80', 'dm-uuid-LVM-QkUINr7O52VgAOJHEAoMCmh3YoWzYgIfU4aSNgsC8vPbdOb0rE3Gs8zf0BVICGgc'], 'uuids': ['53f92492-3feb-4aff-ba7b-51c07dc9f447'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:34.694859 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f', 'scsi-SQEMU_QEMU_HARDDISK_f47182f1-e0cb-4bfc-90df-52f037a6948f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f47182f1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:34.694873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-D6b3sE-Mi8J-r5xO-n2lB-LdJH-IGLG-mMunFp', 'scsi-0QEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d', 'scsi-SQEMU_QEMU_HARDDISK_513f3ae0-646a-4c6d-9e1f-306e5b70376d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.935806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.935916 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.935931 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.935962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.935972 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR', 'dm-uuid-CRYPT-LUKS2-0e39c5b023134ee09db3234d14233a9c-3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.935981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.936007 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d21def1--f46f--5673--adc8--800ee07d688b-osd--block--4d21def1--f46f--5673--adc8--800ee07d688b', 'dm-uuid-LVM-c6LqgMG2szcm8fNU9eHkypsWtweUbc9K3fFflTGxRHefvCix0NCa6UwfCVcD61tR'], 'uuids': ['0e39c5b0-2313-4ee0-9db3-234d14233a9c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '513f3ae0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3fFflT-GxRH-efvC-ix0N-Ca6U-wfCV-cD61tR']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.936023 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-pMXAFi-Igok-YJyB-0g7Y-SCVb-1IUK-zfVna9', 'scsi-0QEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b', 'scsi-SQEMU_QEMU_HARDDISK_ed299c06-0435-4936-a363-f05696f72d5b'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ed299c06', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--d7857bb6--ee47--5754--bddf--a4c3c3300a80-osd--block--d7857bb6--ee47--5754--bddf--a4c3c3300a80']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.936043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:48:35.936061 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '85facbe5', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1', 'scsi-SQEMU_QEMU_HARDDISK_85facbe5-74b3-4310-a7db-d9f42aedacb8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:49:03.797513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:49:03.797696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:49:03.797715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc', 'dm-uuid-CRYPT-LUKS2-53f924923feb4affba7b51c07dc9f447-U4aSNg-sC8v-PbdO-b0rE-3Gs8-zf0B-VICGgc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:49:03.797729 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.797743 | orchestrator | 2026-03-24 05:49:03.797755 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:49:03.797768 | orchestrator | Tuesday 24 March 2026 05:48:35 +0000 (0:00:01.455) 0:59:16.825 ********* 2026-03-24 05:49:03.797779 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:03.797791 | orchestrator | 2026-03-24 05:49:03.797803 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:49:03.797814 | orchestrator | Tuesday 24 March 2026 05:48:37 +0000 (0:00:01.510) 0:59:18.336 ********* 2026-03-24 05:49:03.797825 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:03.797836 | orchestrator | 2026-03-24 05:49:03.797847 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:49:03.797857 | orchestrator | Tuesday 24 March 2026 05:48:38 +0000 (0:00:01.126) 0:59:19.463 ********* 2026-03-24 05:49:03.797868 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:03.797879 | orchestrator | 2026-03-24 05:49:03.797890 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:49:03.797901 | orchestrator | Tuesday 24 March 2026 05:48:40 +0000 (0:00:01.493) 0:59:20.956 ********* 2026-03-24 05:49:03.797912 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.797922 | orchestrator | 2026-03-24 05:49:03.797933 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:49:03.797944 | orchestrator | Tuesday 24 March 2026 05:48:41 +0000 (0:00:01.129) 0:59:22.086 ********* 2026-03-24 05:49:03.797955 | orchestrator | skipping: [testbed-node-3] 2026-03-24 
05:49:03.797966 | orchestrator | 2026-03-24 05:49:03.797977 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:49:03.797988 | orchestrator | Tuesday 24 March 2026 05:48:42 +0000 (0:00:01.240) 0:59:23.326 ********* 2026-03-24 05:49:03.797998 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.798009 | orchestrator | 2026-03-24 05:49:03.798083 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:49:03.798097 | orchestrator | Tuesday 24 March 2026 05:48:43 +0000 (0:00:01.135) 0:59:24.462 ********* 2026-03-24 05:49:03.798110 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-24 05:49:03.798123 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-24 05:49:03.798136 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-24 05:49:03.798148 | orchestrator | 2026-03-24 05:49:03.798161 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:49:03.798182 | orchestrator | Tuesday 24 March 2026 05:48:45 +0000 (0:00:01.957) 0:59:26.420 ********* 2026-03-24 05:49:03.798195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-24 05:49:03.798207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-24 05:49:03.798220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-24 05:49:03.798232 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.798243 | orchestrator | 2026-03-24 05:49:03.798254 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:49:03.798265 | orchestrator | Tuesday 24 March 2026 05:48:46 +0000 (0:00:01.147) 0:59:27.568 ********* 2026-03-24 05:49:03.798295 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-24 05:49:03.798307 | 
orchestrator | 2026-03-24 05:49:03.798319 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:49:03.798332 | orchestrator | Tuesday 24 March 2026 05:48:47 +0000 (0:00:01.119) 0:59:28.687 ********* 2026-03-24 05:49:03.798343 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.798354 | orchestrator | 2026-03-24 05:49:03.798372 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:49:03.798383 | orchestrator | Tuesday 24 March 2026 05:48:48 +0000 (0:00:01.113) 0:59:29.800 ********* 2026-03-24 05:49:03.798394 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.798405 | orchestrator | 2026-03-24 05:49:03.798416 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:49:03.798427 | orchestrator | Tuesday 24 March 2026 05:48:50 +0000 (0:00:01.149) 0:59:30.950 ********* 2026-03-24 05:49:03.798438 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.798449 | orchestrator | 2026-03-24 05:49:03.798460 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:49:03.798471 | orchestrator | Tuesday 24 March 2026 05:48:51 +0000 (0:00:01.190) 0:59:32.140 ********* 2026-03-24 05:49:03.798482 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:03.798493 | orchestrator | 2026-03-24 05:49:03.798503 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:49:03.798514 | orchestrator | Tuesday 24 March 2026 05:48:52 +0000 (0:00:01.221) 0:59:33.362 ********* 2026-03-24 05:49:03.798525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:49:03.798584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:49:03.798596 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-24 05:49:03.798607 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.798618 | orchestrator | 2026-03-24 05:49:03.798629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:49:03.798640 | orchestrator | Tuesday 24 March 2026 05:48:53 +0000 (0:00:01.392) 0:59:34.754 ********* 2026-03-24 05:49:03.798650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:49:03.798661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:49:03.798672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:49:03.798683 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.798694 | orchestrator | 2026-03-24 05:49:03.798704 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:49:03.798715 | orchestrator | Tuesday 24 March 2026 05:48:55 +0000 (0:00:01.446) 0:59:36.200 ********* 2026-03-24 05:49:03.798726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:49:03.798737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:49:03.798748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:49:03.798759 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:03.798769 | orchestrator | 2026-03-24 05:49:03.798780 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:49:03.798799 | orchestrator | Tuesday 24 March 2026 05:48:56 +0000 (0:00:01.404) 0:59:37.605 ********* 2026-03-24 05:49:03.798810 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:03.798821 | orchestrator | 2026-03-24 05:49:03.798832 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:49:03.798842 | orchestrator | Tuesday 24 March 2026 05:48:57 +0000 
(0:00:01.130) 0:59:38.735 ********* 2026-03-24 05:49:03.798853 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-24 05:49:03.798864 | orchestrator | 2026-03-24 05:49:03.798875 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 05:49:03.798886 | orchestrator | Tuesday 24 March 2026 05:48:59 +0000 (0:00:01.309) 0:59:40.044 ********* 2026-03-24 05:49:03.798896 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:49:03.798907 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:49:03.798918 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:49:03.798929 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-24 05:49:03.798940 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:49:03.798951 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:49:03.798961 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:49:03.798972 | orchestrator | 2026-03-24 05:49:03.798983 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 05:49:03.798994 | orchestrator | Tuesday 24 March 2026 05:49:01 +0000 (0:00:02.115) 0:59:42.160 ********* 2026-03-24 05:49:03.799005 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:49:03.799015 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:49:03.799026 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:49:03.799037 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-24 05:49:03.799048 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:49:03.799059 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-24 05:49:03.799070 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:49:03.799080 | orchestrator | 2026-03-24 05:49:03.799098 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-24 05:49:56.325309 | orchestrator | Tuesday 24 March 2026 05:49:03 +0000 (0:00:02.513) 0:59:44.674 ********* 2026-03-24 05:49:56.325403 | orchestrator | changed: [testbed-node-3] 2026-03-24 05:49:56.325414 | orchestrator | 2026-03-24 05:49:56.325422 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-24 05:49:56.325430 | orchestrator | Tuesday 24 March 2026 05:49:06 +0000 (0:00:02.390) 0:59:47.065 ********* 2026-03-24 05:49:56.325451 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:49:56.325459 | orchestrator | 2026-03-24 05:49:56.325466 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-24 05:49:56.325473 | orchestrator | Tuesday 24 March 2026 05:49:09 +0000 (0:00:02.952) 0:59:50.017 ********* 2026-03-24 05:49:56.325480 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:49:56.325487 | orchestrator | 2026-03-24 05:49:56.325493 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 05:49:56.325500 | orchestrator | Tuesday 24 March 2026 05:49:11 +0000 (0:00:02.345) 0:59:52.363 ********* 2026-03-24 05:49:56.325507 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-24 05:49:56.325533 | orchestrator | 2026-03-24 05:49:56.325540 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-24 05:49:56.325547 | orchestrator | Tuesday 24 March 2026 05:49:12 +0000 (0:00:01.182) 0:59:53.546 ********* 2026-03-24 05:49:56.325588 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-24 05:49:56.325596 | orchestrator | 2026-03-24 05:49:56.325603 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-24 05:49:56.325610 | orchestrator | Tuesday 24 March 2026 05:49:13 +0000 (0:00:01.095) 0:59:54.641 ********* 2026-03-24 05:49:56.325616 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.325623 | orchestrator | 2026-03-24 05:49:56.325629 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-24 05:49:56.325636 | orchestrator | Tuesday 24 March 2026 05:49:14 +0000 (0:00:01.108) 0:59:55.750 ********* 2026-03-24 05:49:56.325642 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.325650 | orchestrator | 2026-03-24 05:49:56.325656 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-24 05:49:56.325663 | orchestrator | Tuesday 24 March 2026 05:49:16 +0000 (0:00:01.569) 0:59:57.319 ********* 2026-03-24 05:49:56.325669 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.325680 | orchestrator | 2026-03-24 05:49:56.325691 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-24 05:49:56.325700 | orchestrator | Tuesday 24 March 2026 05:49:17 +0000 (0:00:01.545) 0:59:58.865 ********* 2026-03-24 05:49:56.325711 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.325721 | orchestrator | 2026-03-24 05:49:56.325732 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-24 05:49:56.325742 | orchestrator | Tuesday 24 March 2026 05:49:19 +0000 (0:00:01.540) 1:00:00.406 ********* 2026-03-24 05:49:56.325752 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.325762 | orchestrator | 2026-03-24 05:49:56.325772 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-24 05:49:56.325782 | orchestrator | Tuesday 24 March 2026 05:49:20 +0000 (0:00:01.109) 1:00:01.515 ********* 2026-03-24 05:49:56.325793 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.325803 | orchestrator | 2026-03-24 05:49:56.325813 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-24 05:49:56.325823 | orchestrator | Tuesday 24 March 2026 05:49:21 +0000 (0:00:01.124) 1:00:02.640 ********* 2026-03-24 05:49:56.325833 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.325843 | orchestrator | 2026-03-24 05:49:56.325854 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-24 05:49:56.325866 | orchestrator | Tuesday 24 March 2026 05:49:22 +0000 (0:00:01.125) 1:00:03.766 ********* 2026-03-24 05:49:56.325878 | 
orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.325890 | orchestrator | 2026-03-24 05:49:56.325902 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-24 05:49:56.325915 | orchestrator | Tuesday 24 March 2026 05:49:24 +0000 (0:00:01.640) 1:00:05.406 ********* 2026-03-24 05:49:56.325927 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.325940 | orchestrator | 2026-03-24 05:49:56.325953 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-24 05:49:56.325966 | orchestrator | Tuesday 24 March 2026 05:49:26 +0000 (0:00:01.663) 1:00:07.070 ********* 2026-03-24 05:49:56.325989 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326002 | orchestrator | 2026-03-24 05:49:56.326069 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-24 05:49:56.326085 | orchestrator | Tuesday 24 March 2026 05:49:27 +0000 (0:00:01.196) 1:00:08.267 ********* 2026-03-24 05:49:56.326099 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326113 | orchestrator | 2026-03-24 05:49:56.326126 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-24 05:49:56.326139 | orchestrator | Tuesday 24 March 2026 05:49:28 +0000 (0:00:01.118) 1:00:09.385 ********* 2026-03-24 05:49:56.326161 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.326174 | orchestrator | 2026-03-24 05:49:56.326186 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-24 05:49:56.326198 | orchestrator | Tuesday 24 March 2026 05:49:29 +0000 (0:00:01.163) 1:00:10.549 ********* 2026-03-24 05:49:56.326211 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.326225 | orchestrator | 2026-03-24 05:49:56.326239 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-24 05:49:56.326251 
| orchestrator | Tuesday 24 March 2026 05:49:30 +0000 (0:00:01.122) 1:00:11.671 ********* 2026-03-24 05:49:56.326264 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.326277 | orchestrator | 2026-03-24 05:49:56.326311 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-24 05:49:56.326324 | orchestrator | Tuesday 24 March 2026 05:49:31 +0000 (0:00:01.155) 1:00:12.827 ********* 2026-03-24 05:49:56.326337 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326350 | orchestrator | 2026-03-24 05:49:56.326362 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-24 05:49:56.326375 | orchestrator | Tuesday 24 March 2026 05:49:33 +0000 (0:00:01.120) 1:00:13.948 ********* 2026-03-24 05:49:56.326395 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326408 | orchestrator | 2026-03-24 05:49:56.326421 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-24 05:49:56.326434 | orchestrator | Tuesday 24 March 2026 05:49:34 +0000 (0:00:01.108) 1:00:15.057 ********* 2026-03-24 05:49:56.326446 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326459 | orchestrator | 2026-03-24 05:49:56.326472 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-24 05:49:56.326484 | orchestrator | Tuesday 24 March 2026 05:49:35 +0000 (0:00:01.112) 1:00:16.169 ********* 2026-03-24 05:49:56.326496 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.326510 | orchestrator | 2026-03-24 05:49:56.326523 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-24 05:49:56.326536 | orchestrator | Tuesday 24 March 2026 05:49:36 +0000 (0:00:01.116) 1:00:17.286 ********* 2026-03-24 05:49:56.326549 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.326614 | orchestrator | 2026-03-24 05:49:56.326627 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-24 05:49:56.326639 | orchestrator | Tuesday 24 March 2026 05:49:37 +0000 (0:00:01.143) 1:00:18.429 ********* 2026-03-24 05:49:56.326651 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326663 | orchestrator | 2026-03-24 05:49:56.326674 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-24 05:49:56.326687 | orchestrator | Tuesday 24 March 2026 05:49:38 +0000 (0:00:01.111) 1:00:19.540 ********* 2026-03-24 05:49:56.326699 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326712 | orchestrator | 2026-03-24 05:49:56.326724 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-24 05:49:56.326736 | orchestrator | Tuesday 24 March 2026 05:49:39 +0000 (0:00:01.141) 1:00:20.681 ********* 2026-03-24 05:49:56.326748 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326761 | orchestrator | 2026-03-24 05:49:56.326773 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-24 05:49:56.326785 | orchestrator | Tuesday 24 March 2026 05:49:40 +0000 (0:00:01.118) 1:00:21.800 ********* 2026-03-24 05:49:56.326796 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326807 | orchestrator | 2026-03-24 05:49:56.326818 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-24 05:49:56.326829 | orchestrator | Tuesday 24 March 2026 05:49:41 +0000 (0:00:01.098) 1:00:22.899 ********* 2026-03-24 05:49:56.326839 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326849 | orchestrator | 2026-03-24 05:49:56.326859 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-24 05:49:56.326869 | orchestrator | Tuesday 24 March 2026 05:49:43 +0000 (0:00:01.090) 1:00:23.990 ********* 
2026-03-24 05:49:56.326890 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326901 | orchestrator | 2026-03-24 05:49:56.326911 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-24 05:49:56.326921 | orchestrator | Tuesday 24 March 2026 05:49:44 +0000 (0:00:01.086) 1:00:25.076 ********* 2026-03-24 05:49:56.326931 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326942 | orchestrator | 2026-03-24 05:49:56.326951 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-24 05:49:56.326963 | orchestrator | Tuesday 24 March 2026 05:49:45 +0000 (0:00:01.122) 1:00:26.198 ********* 2026-03-24 05:49:56.326973 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.326982 | orchestrator | 2026-03-24 05:49:56.326992 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-24 05:49:56.327002 | orchestrator | Tuesday 24 March 2026 05:49:46 +0000 (0:00:01.118) 1:00:27.317 ********* 2026-03-24 05:49:56.327012 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.327021 | orchestrator | 2026-03-24 05:49:56.327031 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-24 05:49:56.327042 | orchestrator | Tuesday 24 March 2026 05:49:47 +0000 (0:00:01.100) 1:00:28.418 ********* 2026-03-24 05:49:56.327051 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.327061 | orchestrator | 2026-03-24 05:49:56.327071 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-24 05:49:56.327080 | orchestrator | Tuesday 24 March 2026 05:49:48 +0000 (0:00:01.090) 1:00:29.508 ********* 2026-03-24 05:49:56.327091 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.327100 | orchestrator | 2026-03-24 05:49:56.327110 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-24 05:49:56.327120 | orchestrator | Tuesday 24 March 2026 05:49:49 +0000 (0:00:01.106) 1:00:30.614 ********* 2026-03-24 05:49:56.327130 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:49:56.327139 | orchestrator | 2026-03-24 05:49:56.327149 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-24 05:49:56.327159 | orchestrator | Tuesday 24 March 2026 05:49:50 +0000 (0:00:01.122) 1:00:31.737 ********* 2026-03-24 05:49:56.327168 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.327179 | orchestrator | 2026-03-24 05:49:56.327188 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-24 05:49:56.327198 | orchestrator | Tuesday 24 March 2026 05:49:52 +0000 (0:00:01.953) 1:00:33.691 ********* 2026-03-24 05:49:56.327208 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:49:56.327218 | orchestrator | 2026-03-24 05:49:56.327229 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-24 05:49:56.327240 | orchestrator | Tuesday 24 March 2026 05:49:55 +0000 (0:00:02.298) 1:00:35.989 ********* 2026-03-24 05:49:56.327250 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-03-24 05:49:56.327262 | orchestrator | 2026-03-24 05:49:56.327272 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-24 05:49:56.327317 | orchestrator | Tuesday 24 March 2026 05:49:56 +0000 (0:00:01.222) 1:00:37.212 ********* 2026-03-24 05:50:43.151093 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.151214 | orchestrator | 2026-03-24 05:50:43.151240 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-24 05:50:43.151261 | orchestrator | Tuesday 24 March 2026 05:49:57 +0000 (0:00:01.115) 1:00:38.328 ********* 
2026-03-24 05:50:43.151280 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.151298 | orchestrator | 2026-03-24 05:50:43.151326 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-24 05:50:43.151338 | orchestrator | Tuesday 24 March 2026 05:49:58 +0000 (0:00:01.135) 1:00:39.463 ********* 2026-03-24 05:50:43.151349 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-24 05:50:43.151360 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-24 05:50:43.151398 | orchestrator | 2026-03-24 05:50:43.151409 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-24 05:50:43.151420 | orchestrator | Tuesday 24 March 2026 05:50:00 +0000 (0:00:01.862) 1:00:41.326 ********* 2026-03-24 05:50:43.151431 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:50:43.151443 | orchestrator | 2026-03-24 05:50:43.151453 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-24 05:50:43.151464 | orchestrator | Tuesday 24 March 2026 05:50:01 +0000 (0:00:01.512) 1:00:42.839 ********* 2026-03-24 05:50:43.151474 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.151485 | orchestrator | 2026-03-24 05:50:43.151496 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-24 05:50:43.151506 | orchestrator | Tuesday 24 March 2026 05:50:03 +0000 (0:00:01.141) 1:00:43.981 ********* 2026-03-24 05:50:43.151517 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.151527 | orchestrator | 2026-03-24 05:50:43.151539 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-24 05:50:43.151550 | orchestrator | Tuesday 24 March 2026 05:50:04 +0000 (0:00:01.140) 1:00:45.122 ********* 2026-03-24 05:50:43.151561 | orchestrator | 
skipping: [testbed-node-3] 2026-03-24 05:50:43.151611 | orchestrator | 2026-03-24 05:50:43.151630 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-24 05:50:43.151650 | orchestrator | Tuesday 24 March 2026 05:50:05 +0000 (0:00:01.121) 1:00:46.243 ********* 2026-03-24 05:50:43.151668 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-03-24 05:50:43.151688 | orchestrator | 2026-03-24 05:50:43.151708 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-24 05:50:43.151726 | orchestrator | Tuesday 24 March 2026 05:50:06 +0000 (0:00:01.095) 1:00:47.339 ********* 2026-03-24 05:50:43.151745 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:50:43.151758 | orchestrator | 2026-03-24 05:50:43.151771 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-24 05:50:43.151789 | orchestrator | Tuesday 24 March 2026 05:50:08 +0000 (0:00:01.689) 1:00:49.029 ********* 2026-03-24 05:50:43.151808 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 05:50:43.151827 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 05:50:43.151846 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 05:50:43.151865 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.151885 | orchestrator | 2026-03-24 05:50:43.151906 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-24 05:50:43.151920 | orchestrator | Tuesday 24 March 2026 05:50:09 +0000 (0:00:01.128) 1:00:50.157 ********* 2026-03-24 05:50:43.151932 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.151945 | orchestrator | 2026-03-24 05:50:43.151958 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-24 05:50:43.151990 | orchestrator | Tuesday 24 March 2026 05:50:10 +0000 (0:00:01.108) 1:00:51.266 ********* 2026-03-24 05:50:43.152010 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152030 | orchestrator | 2026-03-24 05:50:43.152051 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-24 05:50:43.152070 | orchestrator | Tuesday 24 March 2026 05:50:11 +0000 (0:00:01.520) 1:00:52.786 ********* 2026-03-24 05:50:43.152089 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152108 | orchestrator | 2026-03-24 05:50:43.152128 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-24 05:50:43.152148 | orchestrator | Tuesday 24 March 2026 05:50:13 +0000 (0:00:01.128) 1:00:53.914 ********* 2026-03-24 05:50:43.152168 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152182 | orchestrator | 2026-03-24 05:50:43.152193 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-24 05:50:43.152214 | orchestrator | Tuesday 24 March 2026 05:50:14 +0000 (0:00:01.127) 1:00:55.042 ********* 2026-03-24 05:50:43.152225 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152236 | orchestrator | 2026-03-24 05:50:43.152246 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-24 05:50:43.152257 | orchestrator | Tuesday 24 March 2026 05:50:15 +0000 (0:00:01.113) 1:00:56.155 ********* 2026-03-24 05:50:43.152268 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:50:43.152278 | orchestrator | 2026-03-24 05:50:43.152289 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-24 05:50:43.152299 | orchestrator | Tuesday 24 March 2026 05:50:17 +0000 (0:00:02.554) 1:00:58.710 ********* 2026-03-24 05:50:43.152310 | orchestrator | ok: 
[testbed-node-3] 2026-03-24 05:50:43.152320 | orchestrator | 2026-03-24 05:50:43.152331 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-24 05:50:43.152342 | orchestrator | Tuesday 24 March 2026 05:50:18 +0000 (0:00:01.096) 1:00:59.807 ********* 2026-03-24 05:50:43.152352 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-03-24 05:50:43.152363 | orchestrator | 2026-03-24 05:50:43.152374 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-24 05:50:43.152403 | orchestrator | Tuesday 24 March 2026 05:50:20 +0000 (0:00:01.105) 1:01:00.912 ********* 2026-03-24 05:50:43.152415 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152425 | orchestrator | 2026-03-24 05:50:43.152436 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-24 05:50:43.152447 | orchestrator | Tuesday 24 March 2026 05:50:21 +0000 (0:00:01.139) 1:01:02.052 ********* 2026-03-24 05:50:43.152457 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152468 | orchestrator | 2026-03-24 05:50:43.152486 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-24 05:50:43.152497 | orchestrator | Tuesday 24 March 2026 05:50:22 +0000 (0:00:01.125) 1:01:03.177 ********* 2026-03-24 05:50:43.152508 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152518 | orchestrator | 2026-03-24 05:50:43.152529 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-24 05:50:43.152540 | orchestrator | Tuesday 24 March 2026 05:50:23 +0000 (0:00:01.127) 1:01:04.305 ********* 2026-03-24 05:50:43.152550 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152561 | orchestrator | 2026-03-24 05:50:43.152597 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-24 05:50:43.152616 | orchestrator | Tuesday 24 March 2026 05:50:24 +0000 (0:00:01.129) 1:01:05.435 ********* 2026-03-24 05:50:43.152628 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152638 | orchestrator | 2026-03-24 05:50:43.152649 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-24 05:50:43.152660 | orchestrator | Tuesday 24 March 2026 05:50:25 +0000 (0:00:01.170) 1:01:06.605 ********* 2026-03-24 05:50:43.152670 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152681 | orchestrator | 2026-03-24 05:50:43.152691 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-24 05:50:43.152701 | orchestrator | Tuesday 24 March 2026 05:50:26 +0000 (0:00:01.138) 1:01:07.744 ********* 2026-03-24 05:50:43.152712 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152722 | orchestrator | 2026-03-24 05:50:43.152733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-24 05:50:43.152744 | orchestrator | Tuesday 24 March 2026 05:50:27 +0000 (0:00:01.142) 1:01:08.887 ********* 2026-03-24 05:50:43.152754 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:50:43.152765 | orchestrator | 2026-03-24 05:50:43.152775 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-24 05:50:43.152786 | orchestrator | Tuesday 24 March 2026 05:50:29 +0000 (0:00:01.152) 1:01:10.040 ********* 2026-03-24 05:50:43.152797 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:50:43.152807 | orchestrator | 2026-03-24 05:50:43.152818 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-24 05:50:43.152836 | orchestrator | Tuesday 24 March 2026 05:50:30 +0000 (0:00:01.187) 1:01:11.227 ********* 2026-03-24 05:50:43.152847 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-03-24 05:50:43.152857 | orchestrator | 2026-03-24 05:50:43.152868 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-24 05:50:43.152878 | orchestrator | Tuesday 24 March 2026 05:50:31 +0000 (0:00:01.101) 1:01:12.329 ********* 2026-03-24 05:50:43.152889 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-24 05:50:43.152900 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-24 05:50:43.152911 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-24 05:50:43.152921 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-24 05:50:43.152932 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-24 05:50:43.152942 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-24 05:50:43.152952 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-24 05:50:43.152963 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-24 05:50:43.152973 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 05:50:43.152984 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 05:50:43.152995 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 05:50:43.153005 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 05:50:43.153016 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 05:50:43.153026 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 05:50:43.153037 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-24 05:50:43.153048 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-24 05:50:43.153058 | orchestrator | 2026-03-24 05:50:43.153069 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-24 05:50:43.153080 | orchestrator | Tuesday 24 March 2026 05:50:38 +0000 (0:00:07.040) 1:01:19.369 ********* 2026-03-24 05:50:43.153090 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-03-24 05:50:43.153101 | orchestrator | 2026-03-24 05:50:43.153112 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-24 05:50:43.153122 | orchestrator | Tuesday 24 March 2026 05:50:39 +0000 (0:00:01.132) 1:01:20.501 ********* 2026-03-24 05:50:43.153133 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:50:43.153145 | orchestrator | 2026-03-24 05:50:43.153156 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-24 05:50:43.153166 | orchestrator | Tuesday 24 March 2026 05:50:41 +0000 (0:00:01.549) 1:01:22.051 ********* 2026-03-24 05:50:43.153177 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:50:43.153188 | orchestrator | 2026-03-24 05:50:43.153198 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-24 05:50:43.153217 | orchestrator | Tuesday 24 March 2026 05:50:43 +0000 (0:00:01.986) 1:01:24.038 ********* 2026-03-24 05:51:33.334694 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.334795 | orchestrator | 2026-03-24 05:51:33.334807 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-24 05:51:33.334815 | orchestrator | Tuesday 24 March 2026 05:50:44 +0000 (0:00:01.101) 1:01:25.139 ********* 2026-03-24 05:51:33.334834 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.334841 | 
orchestrator | 2026-03-24 05:51:33.334848 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-24 05:51:33.334854 | orchestrator | Tuesday 24 March 2026 05:50:45 +0000 (0:00:01.173) 1:01:26.312 ********* 2026-03-24 05:51:33.334879 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.334885 | orchestrator | 2026-03-24 05:51:33.334892 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-24 05:51:33.334898 | orchestrator | Tuesday 24 March 2026 05:50:46 +0000 (0:00:01.111) 1:01:27.424 ********* 2026-03-24 05:51:33.334905 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.334911 | orchestrator | 2026-03-24 05:51:33.334917 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-24 05:51:33.334923 | orchestrator | Tuesday 24 March 2026 05:50:47 +0000 (0:00:01.123) 1:01:28.547 ********* 2026-03-24 05:51:33.334929 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.334935 | orchestrator | 2026-03-24 05:51:33.334942 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-24 05:51:33.334949 | orchestrator | Tuesday 24 March 2026 05:50:48 +0000 (0:00:01.104) 1:01:29.651 ********* 2026-03-24 05:51:33.334955 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.334961 | orchestrator | 2026-03-24 05:51:33.334967 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-24 05:51:33.334974 | orchestrator | Tuesday 24 March 2026 05:50:49 +0000 (0:00:01.164) 1:01:30.815 ********* 2026-03-24 05:51:33.334980 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.334986 | orchestrator | 2026-03-24 05:51:33.334992 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-24 05:51:33.334998 | orchestrator | Tuesday 24 March 2026 05:50:51 +0000 (0:00:01.137) 1:01:31.952 ********* 2026-03-24 05:51:33.335004 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335011 | orchestrator | 2026-03-24 05:51:33.335017 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 05:51:33.335023 | orchestrator | Tuesday 24 March 2026 05:50:52 +0000 (0:00:01.112) 1:01:33.065 ********* 2026-03-24 05:51:33.335029 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335035 | orchestrator | 2026-03-24 05:51:33.335041 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 05:51:33.335047 | orchestrator | Tuesday 24 March 2026 05:50:53 +0000 (0:00:01.122) 1:01:34.187 ********* 2026-03-24 05:51:33.335053 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335059 | orchestrator | 2026-03-24 05:51:33.335066 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 05:51:33.335074 | orchestrator | Tuesday 24 March 2026 05:50:54 +0000 (0:00:01.132) 1:01:35.320 ********* 2026-03-24 05:51:33.335084 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335097 | orchestrator | 2026-03-24 05:51:33.335112 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 05:51:33.335122 | orchestrator | Tuesday 24 March 2026 05:50:55 +0000 (0:00:01.218) 1:01:36.538 ********* 2026-03-24 05:51:33.335133 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-24 05:51:33.335142 | orchestrator | 2026-03-24 05:51:33.335152 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 05:51:33.335163 | orchestrator | Tuesday 24 March 2026 05:51:00 +0000 (0:00:04.608) 1:01:41.147 ********* 2026-03-24 05:51:33.335174 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:51:33.335186 | orchestrator | 2026-03-24 05:51:33.335197 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-24 05:51:33.335204 | orchestrator | Tuesday 24 March 2026 05:51:01 +0000 (0:00:01.167) 1:01:42.314 ********* 2026-03-24 05:51:33.335212 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-24 05:51:33.335229 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-24 05:51:33.335236 | orchestrator | 2026-03-24 05:51:33.335243 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:51:33.335249 | orchestrator | Tuesday 24 March 2026 05:51:06 +0000 (0:00:05.374) 1:01:47.689 ********* 2026-03-24 05:51:33.335255 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335261 | orchestrator | 2026-03-24 05:51:33.335267 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:51:33.335273 | orchestrator | Tuesday 24 March 2026 05:51:07 +0000 (0:00:01.150) 1:01:48.840 ********* 2026-03-24 05:51:33.335279 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335285 | orchestrator | 2026-03-24 05:51:33.335292 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:51:33.335322 | orchestrator | Tuesday 24 March 2026 05:51:09 +0000 (0:00:01.108) 1:01:49.949 ********* 2026-03-24 05:51:33.335329 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335335 | orchestrator | 2026-03-24 05:51:33.335341 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:51:33.335352 | orchestrator | Tuesday 24 March 2026 05:51:10 +0000 (0:00:01.139) 1:01:51.088 ********* 2026-03-24 05:51:33.335358 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335365 | orchestrator | 2026-03-24 05:51:33.335371 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:51:33.335377 | orchestrator | Tuesday 24 March 2026 05:51:11 +0000 (0:00:01.129) 1:01:52.218 ********* 2026-03-24 05:51:33.335383 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335389 | orchestrator | 2026-03-24 05:51:33.335395 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:51:33.335401 | orchestrator | Tuesday 24 March 2026 05:51:12 +0000 (0:00:01.166) 1:01:53.384 ********* 2026-03-24 05:51:33.335407 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:51:33.335415 | orchestrator | 2026-03-24 05:51:33.335421 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:51:33.335427 | orchestrator | Tuesday 24 March 2026 05:51:13 +0000 (0:00:01.243) 1:01:54.628 ********* 2026-03-24 05:51:33.335433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:51:33.335440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:51:33.335446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:51:33.335452 | orchestrator | skipping: 
[testbed-node-3] 2026-03-24 05:51:33.335458 | orchestrator | 2026-03-24 05:51:33.335464 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:51:33.335470 | orchestrator | Tuesday 24 March 2026 05:51:15 +0000 (0:00:01.464) 1:01:56.093 ********* 2026-03-24 05:51:33.335476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:51:33.335482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:51:33.335489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:51:33.335495 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335501 | orchestrator | 2026-03-24 05:51:33.335510 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:51:33.335521 | orchestrator | Tuesday 24 March 2026 05:51:16 +0000 (0:00:01.408) 1:01:57.501 ********* 2026-03-24 05:51:33.335537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-24 05:51:33.335549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-24 05:51:33.335560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-24 05:51:33.335579 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335614 | orchestrator | 2026-03-24 05:51:33.335626 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:51:33.335634 | orchestrator | Tuesday 24 March 2026 05:51:17 +0000 (0:00:01.379) 1:01:58.881 ********* 2026-03-24 05:51:33.335640 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:51:33.335646 | orchestrator | 2026-03-24 05:51:33.335652 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:51:33.335659 | orchestrator | Tuesday 24 March 2026 05:51:19 +0000 (0:00:01.135) 1:02:00.017 ********* 2026-03-24 05:51:33.335665 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-03-24 05:51:33.335671 | orchestrator | 2026-03-24 05:51:33.335677 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:51:33.335683 | orchestrator | Tuesday 24 March 2026 05:51:20 +0000 (0:00:01.343) 1:02:01.360 ********* 2026-03-24 05:51:33.335689 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:51:33.335695 | orchestrator | 2026-03-24 05:51:33.335702 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-24 05:51:33.335708 | orchestrator | Tuesday 24 March 2026 05:51:22 +0000 (0:00:01.829) 1:02:03.190 ********* 2026-03-24 05:51:33.335714 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-03-24 05:51:33.335720 | orchestrator | 2026-03-24 05:51:33.335726 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-24 05:51:33.335732 | orchestrator | Tuesday 24 March 2026 05:51:23 +0000 (0:00:01.453) 1:02:04.643 ********* 2026-03-24 05:51:33.335738 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:51:33.335744 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 05:51:33.335750 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 05:51:33.335757 | orchestrator | 2026-03-24 05:51:33.335763 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:51:33.335769 | orchestrator | Tuesday 24 March 2026 05:51:27 +0000 (0:00:03.259) 1:02:07.902 ********* 2026-03-24 05:51:33.335775 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-24 05:51:33.335781 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-24 05:51:33.335787 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:51:33.335793 | orchestrator | 2026-03-24 05:51:33.335800 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-24 05:51:33.335806 | orchestrator | Tuesday 24 March 2026 05:51:29 +0000 (0:00:02.049) 1:02:09.952 ********* 2026-03-24 05:51:33.335812 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:51:33.335818 | orchestrator | 2026-03-24 05:51:33.335824 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-24 05:51:33.335830 | orchestrator | Tuesday 24 March 2026 05:51:30 +0000 (0:00:01.113) 1:02:11.065 ********* 2026-03-24 05:51:33.335837 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-03-24 05:51:33.335844 | orchestrator | 2026-03-24 05:51:33.335850 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-24 05:51:33.335856 | orchestrator | Tuesday 24 March 2026 05:51:31 +0000 (0:00:01.510) 1:02:12.576 ********* 2026-03-24 05:51:33.335868 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:52:48.687204 | orchestrator | 2026-03-24 05:52:48.687307 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-24 05:52:48.687320 | orchestrator | Tuesday 24 March 2026 05:51:33 +0000 (0:00:01.645) 1:02:14.222 ********* 2026-03-24 05:52:48.687343 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:52:48.687352 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-24 05:52:48.687362 | orchestrator | 2026-03-24 05:52:48.687369 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-24 05:52:48.687397 | orchestrator | Tuesday 24 March 2026 05:51:38 +0000 (0:00:05.221) 1:02:19.444 ********* 
2026-03-24 05:52:48.687404 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:52:48.687412 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 05:52:48.687420 | orchestrator | 2026-03-24 05:52:48.687427 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:52:48.687434 | orchestrator | Tuesday 24 March 2026 05:51:41 +0000 (0:00:03.136) 1:02:22.580 ********* 2026-03-24 05:52:48.687440 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-24 05:52:48.687448 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:52:48.687456 | orchestrator | 2026-03-24 05:52:48.687464 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-24 05:52:48.687471 | orchestrator | Tuesday 24 March 2026 05:51:43 +0000 (0:00:02.000) 1:02:24.581 ********* 2026-03-24 05:52:48.687478 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-24 05:52:48.687485 | orchestrator | 2026-03-24 05:52:48.687492 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-24 05:52:48.687500 | orchestrator | Tuesday 24 March 2026 05:51:45 +0000 (0:00:01.504) 1:02:26.085 ********* 2026-03-24 05:52:48.687507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687543 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:52:48.687550 | orchestrator | 2026-03-24 05:52:48.687558 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-24 05:52:48.687565 | orchestrator | Tuesday 24 March 2026 05:51:47 +0000 (0:00:01.953) 1:02:28.039 ********* 2026-03-24 05:52:48.687572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:52:48.687608 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:52:48.687676 | orchestrator | 2026-03-24 05:52:48.687685 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-24 05:52:48.687693 | orchestrator | Tuesday 24 March 2026 05:51:48 +0000 (0:00:01.623) 1:02:29.662 ********* 2026-03-24 05:52:48.687700 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:52:48.687710 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:52:48.687724 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:52:48.687731 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:52:48.687740 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:52:48.687747 | orchestrator | 2026-03-24 05:52:48.687754 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-24 05:52:48.687776 | orchestrator | Tuesday 24 March 2026 05:52:21 +0000 (0:00:32.524) 1:03:02.187 ********* 2026-03-24 05:52:48.687785 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:52:48.687792 | orchestrator | 2026-03-24 05:52:48.687800 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-24 05:52:48.687812 | orchestrator | Tuesday 24 March 2026 05:52:22 +0000 (0:00:01.109) 1:03:03.297 ********* 2026-03-24 05:52:48.687819 | orchestrator | skipping: [testbed-node-3] 2026-03-24 05:52:48.687827 | orchestrator | 2026-03-24 05:52:48.687834 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-24 05:52:48.687841 | orchestrator | Tuesday 24 March 2026 05:52:23 +0000 (0:00:01.107) 1:03:04.404 ********* 2026-03-24 05:52:48.687849 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-03-24 05:52:48.687856 | orchestrator | 2026-03-24 05:52:48.687863 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-24 05:52:48.687870 | orchestrator | Tuesday 24 March 2026 05:52:24 +0000 (0:00:01.486) 1:03:05.891 ********* 2026-03-24 05:52:48.687878 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-03-24 05:52:48.687886 | orchestrator | 2026-03-24 05:52:48.687893 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-24 05:52:48.687901 | orchestrator | Tuesday 24 March 2026 05:52:26 +0000 (0:00:01.448) 1:03:07.340 ********* 2026-03-24 05:52:48.687908 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:52:48.687915 | orchestrator | 2026-03-24 05:52:48.687923 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-24 05:52:48.687930 | orchestrator | Tuesday 24 March 2026 05:52:28 +0000 (0:00:02.020) 1:03:09.361 ********* 2026-03-24 05:52:48.687937 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:52:48.687944 | orchestrator | 2026-03-24 05:52:48.687952 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-24 05:52:48.687959 | orchestrator | Tuesday 24 March 2026 05:52:30 +0000 (0:00:01.923) 1:03:11.284 ********* 2026-03-24 05:52:48.687966 | orchestrator | ok: [testbed-node-3] 2026-03-24 05:52:48.687973 | orchestrator | 2026-03-24 05:52:48.687980 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-24 05:52:48.687987 | orchestrator | Tuesday 24 March 2026 05:52:32 +0000 (0:00:02.272) 1:03:13.556 ********* 2026-03-24 05:52:48.687995 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-24 05:52:48.688002 | orchestrator | 2026-03-24 05:52:48.688009 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-24 05:52:48.688016 | 
orchestrator | 2026-03-24 05:52:48.688023 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:52:48.688030 | orchestrator | Tuesday 24 March 2026 05:52:35 +0000 (0:00:03.152) 1:03:16.709 ********* 2026-03-24 05:52:48.688037 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-03-24 05:52:48.688045 | orchestrator | 2026-03-24 05:52:48.688052 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:52:48.688059 | orchestrator | Tuesday 24 March 2026 05:52:36 +0000 (0:00:01.105) 1:03:17.815 ********* 2026-03-24 05:52:48.688066 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:52:48.688081 | orchestrator | 2026-03-24 05:52:48.688089 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:52:48.688096 | orchestrator | Tuesday 24 March 2026 05:52:38 +0000 (0:00:01.456) 1:03:19.271 ********* 2026-03-24 05:52:48.688102 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:52:48.688109 | orchestrator | 2026-03-24 05:52:48.688117 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:52:48.688124 | orchestrator | Tuesday 24 March 2026 05:52:39 +0000 (0:00:01.133) 1:03:20.405 ********* 2026-03-24 05:52:48.688131 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:52:48.688138 | orchestrator | 2026-03-24 05:52:48.688145 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:52:48.688152 | orchestrator | Tuesday 24 March 2026 05:52:40 +0000 (0:00:01.452) 1:03:21.857 ********* 2026-03-24 05:52:48.688159 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:52:48.688166 | orchestrator | 2026-03-24 05:52:48.688173 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:52:48.688180 | orchestrator | Tuesday 24 
March 2026 05:52:42 +0000 (0:00:01.136) 1:03:22.993 ********* 2026-03-24 05:52:48.688187 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:52:48.688194 | orchestrator | 2026-03-24 05:52:48.688201 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:52:48.688209 | orchestrator | Tuesday 24 March 2026 05:52:43 +0000 (0:00:01.153) 1:03:24.147 ********* 2026-03-24 05:52:48.688216 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:52:48.688223 | orchestrator | 2026-03-24 05:52:48.688230 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:52:48.688237 | orchestrator | Tuesday 24 March 2026 05:52:44 +0000 (0:00:01.140) 1:03:25.288 ********* 2026-03-24 05:52:48.688244 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:52:48.688251 | orchestrator | 2026-03-24 05:52:48.688258 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:52:48.688266 | orchestrator | Tuesday 24 March 2026 05:52:45 +0000 (0:00:01.177) 1:03:26.465 ********* 2026-03-24 05:52:48.688273 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:52:48.688279 | orchestrator | 2026-03-24 05:52:48.688286 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:52:48.688294 | orchestrator | Tuesday 24 March 2026 05:52:46 +0000 (0:00:01.112) 1:03:27.578 ********* 2026-03-24 05:52:48.688301 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:52:48.688308 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:52:48.688315 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:52:48.688322 | orchestrator | 2026-03-24 05:52:48.688329 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-24 05:52:48.688342 | orchestrator | Tuesday 24 March 2026 05:52:48 +0000 (0:00:01.992) 1:03:29.570 ********* 2026-03-24 05:53:13.305124 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:53:13.305216 | orchestrator | 2026-03-24 05:53:13.305226 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:53:13.305247 | orchestrator | Tuesday 24 March 2026 05:52:50 +0000 (0:00:01.560) 1:03:31.131 ********* 2026-03-24 05:53:13.305253 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:53:13.305259 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:53:13.305265 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:53:13.305271 | orchestrator | 2026-03-24 05:53:13.305276 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:53:13.305282 | orchestrator | Tuesday 24 March 2026 05:52:53 +0000 (0:00:02.889) 1:03:34.021 ********* 2026-03-24 05:53:13.305288 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-24 05:53:13.305294 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-24 05:53:13.305317 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-24 05:53:13.305322 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:13.305328 | orchestrator | 2026-03-24 05:53:13.305334 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:53:13.305339 | orchestrator | Tuesday 24 March 2026 05:52:54 +0000 (0:00:01.392) 1:03:35.414 ********* 2026-03-24 05:53:13.305347 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:53:13.305355 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:53:13.305361 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:53:13.305366 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:13.305372 | orchestrator | 2026-03-24 05:53:13.305377 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:53:13.305383 | orchestrator | Tuesday 24 March 2026 05:52:56 +0000 (0:00:01.657) 1:03:37.071 ********* 2026-03-24 05:53:13.305390 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:13.305398 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:13.305404 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:13.305410 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:13.305415 | orchestrator | 2026-03-24 05:53:13.305421 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:53:13.305426 | orchestrator | Tuesday 24 March 2026 05:52:57 +0000 (0:00:01.177) 1:03:38.249 ********* 2026-03-24 05:53:13.305446 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:52:50.817194', 'end': '2026-03-24 05:52:50.863030', 'delta': '0:00:00.045836', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:53:13.305464 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:52:51.400478', 'end': '2026-03-24 05:52:51.437374', 'delta': '0:00:00.036896', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:53:13.305480 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:52:51.925674', 'end': '2026-03-24 05:52:51.977775', 'delta': '0:00:00.052101', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:53:13.305489 | orchestrator | 2026-03-24 05:53:13.305501 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:53:13.305510 | orchestrator | Tuesday 24 March 2026 05:52:58 +0000 (0:00:01.193) 1:03:39.443 ********* 2026-03-24 05:53:13.305519 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:53:13.305527 | orchestrator | 2026-03-24 05:53:13.305536 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:53:13.305545 | orchestrator | Tuesday 24 March 2026 05:52:59 +0000 (0:00:01.262) 1:03:40.705 ********* 2026-03-24 05:53:13.305554 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:13.305562 | orchestrator | 2026-03-24 05:53:13.305571 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-24 05:53:13.305580 | orchestrator | Tuesday 24 March 2026 05:53:01 +0000 (0:00:01.267) 1:03:41.972 ********* 2026-03-24 05:53:13.305588 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:53:13.305597 | orchestrator | 2026-03-24 05:53:13.305606 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:53:13.305615 | orchestrator | Tuesday 24 March 2026 05:53:02 +0000 (0:00:01.117) 1:03:43.089 ********* 2026-03-24 05:53:13.305651 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:53:13.305657 | orchestrator | 2026-03-24 05:53:13.305662 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:53:13.305668 | orchestrator | Tuesday 24 March 2026 05:53:04 +0000 (0:00:01.981) 1:03:45.071 ********* 2026-03-24 05:53:13.305675 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:53:13.305681 | orchestrator | 2026-03-24 05:53:13.305687 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:53:13.305693 | orchestrator | Tuesday 24 March 2026 05:53:05 +0000 (0:00:01.137) 1:03:46.208 ********* 2026-03-24 05:53:13.305699 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:13.305705 | orchestrator | 2026-03-24 05:53:13.305712 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:53:13.305718 | orchestrator | Tuesday 24 March 2026 05:53:06 +0000 (0:00:01.082) 1:03:47.291 ********* 2026-03-24 05:53:13.305724 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:13.305730 | orchestrator | 2026-03-24 05:53:13.305736 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:53:13.305742 | orchestrator | Tuesday 24 March 2026 05:53:07 +0000 (0:00:01.211) 1:03:48.502 ********* 2026-03-24 05:53:13.305749 | orchestrator | 
skipping: [testbed-node-4] 2026-03-24 05:53:13.305760 | orchestrator | 2026-03-24 05:53:13.305766 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:53:13.305772 | orchestrator | Tuesday 24 March 2026 05:53:08 +0000 (0:00:01.150) 1:03:49.653 ********* 2026-03-24 05:53:13.305779 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:13.305785 | orchestrator | 2026-03-24 05:53:13.305791 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:53:13.305797 | orchestrator | Tuesday 24 March 2026 05:53:09 +0000 (0:00:01.124) 1:03:50.778 ********* 2026-03-24 05:53:13.305803 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:53:13.305809 | orchestrator | 2026-03-24 05:53:13.305816 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:53:13.305822 | orchestrator | Tuesday 24 March 2026 05:53:11 +0000 (0:00:01.173) 1:03:51.951 ********* 2026-03-24 05:53:13.305828 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:13.305834 | orchestrator | 2026-03-24 05:53:13.305840 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:53:13.305846 | orchestrator | Tuesday 24 March 2026 05:53:12 +0000 (0:00:01.096) 1:03:53.048 ********* 2026-03-24 05:53:13.305853 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:53:13.305859 | orchestrator | 2026-03-24 05:53:13.305865 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:53:13.305879 | orchestrator | Tuesday 24 March 2026 05:53:13 +0000 (0:00:01.142) 1:03:54.190 ********* 2026-03-24 05:53:15.825203 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:15.825298 | orchestrator | 2026-03-24 05:53:15.825330 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:53:15.825342 
| orchestrator | Tuesday 24 March 2026 05:53:14 +0000 (0:00:01.105) 1:03:55.296 ********* 2026-03-24 05:53:15.825353 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:53:15.825364 | orchestrator | 2026-03-24 05:53:15.825374 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:53:15.825384 | orchestrator | Tuesday 24 March 2026 05:53:15 +0000 (0:00:01.166) 1:03:56.462 ********* 2026-03-24 05:53:15.825395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:53:15.825410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'uuids': ['b8232bef-dd2a-4f87-af94-920947facf6d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7']}})  2026-03-24 05:53:15.825424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a2e3e3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:53:15.825436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0']}})  2026-03-24 05:53:15.825468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:53:15.825479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:53:15.825516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:53:15.825536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:53:15.825553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3', 'dm-uuid-CRYPT-LUKS2-fea79c97fade4123ac0e1fedfdaf5b5c-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:53:15.825569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:53:15.825586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'uuids': ['fea79c97-fade-4123-ac0e-1fedfdaf5b5c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3']}})  2026-03-24 05:53:15.825614 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537']}})  2026-03-24 05:53:15.825660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:53:15.825708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '063919ee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:53:17.128375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:53:17.128533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:53:17.128560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7', 'dm-uuid-CRYPT-LUKS2-b8232befdd2a4f87af94920947facf6d-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:53:17.128584 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:53:17.128605 | orchestrator | 2026-03-24 05:53:17.128689 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:53:17.128713 | orchestrator | Tuesday 24 March 2026 05:53:16 +0000 (0:00:01.343) 1:03:57.806 ********* 2026-03-24 05:53:17.128733 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:17.128772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537', 'dm-uuid-LVM-XbSA2UFaicZT872yZ622ekJFj10fMwH0JEo4SleWe74iHqGgxfLakuELu16L5ly7'], 'uuids': ['b8232bef-dd2a-4f87-af94-920947facf6d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:17.128796 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b', 'scsi-SQEMU_QEMU_HARDDISK_1a2e3e3a-174f-4e75-8feb-939a2c61d94b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1a2e3e3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:17.128842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-8tIlre-nDcH-d4WO-dDqe-15P3-a6Kf-hWnlPO', 'scsi-0QEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710', 'scsi-SQEMU_QEMU_HARDDISK_b0876e92-837d-465a-b4f4-3ffe4ea78710'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:17.128881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:17.128903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:17.128929 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:17.128950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:17.128980 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3', 'dm-uuid-CRYPT-LUKS2-fea79c97fade4123ac0e1fedfdaf5b5c-LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:22.415137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:22.415249 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d735645--9e18--5d04--8028--1696940918c0-osd--block--4d735645--9e18--5d04--8028--1696940918c0', 'dm-uuid-LVM-INeaggMNOp6FZGOuz9Heo1qvqkuW3CNULdTR9dTV2au09j7C1JoBeyXz0qK2I2U3'], 'uuids': ['fea79c97-fade-4123-ac0e-1fedfdaf5b5c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b0876e92', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['LdTR9d-TV2a-u09j-7C1J-oBey-Xz0q-K2I2U3']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:22.415284 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-XLf3Z7-0inA-bRYZ-qYoF-9CDe-3B2E-vSOsyO', 'scsi-0QEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c', 'scsi-SQEMU_QEMU_HARDDISK_2604bb68-60c6-4ec4-9aac-15d0d9f1349c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2604bb68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a329e066--8536--5438--99e1--d9cc3f91f537-osd--block--a329e066--8536--5438--99e1--d9cc3f91f537']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:22.415301 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:22.415332 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '063919ee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1', 'scsi-SQEMU_QEMU_HARDDISK_063919ee-14ee-405a-807c-08e7f14724ba-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:22.415368 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:22.415387 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:53:22.415399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7', 'dm-uuid-CRYPT-LUKS2-b8232befdd2a4f87af94920947facf6d-JEo4Sl-eWe7-4iHq-Ggxf-Laku-ELu1-6L5ly7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-24 05:53:22.415412 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:53:22.415425 | orchestrator |
2026-03-24 05:53:22.415438 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-24 05:53:22.415458 | orchestrator | Tuesday 24 March 2026 05:53:18 +0000 (0:00:01.375) 1:03:59.181 *********
2026-03-24 05:53:22.415470 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:53:22.415481 | orchestrator |
2026-03-24 05:53:22.415492 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-24 05:53:22.415503 | orchestrator | Tuesday 24 March 2026 05:53:19 +0000 (0:00:01.537) 1:04:00.719 *********
2026-03-24 05:53:22.415514 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:53:22.415525 | orchestrator |
2026-03-24 05:53:22.415536 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 05:53:22.415547 | orchestrator | Tuesday 24 March 2026 05:53:20 +0000 (0:00:01.093) 1:04:01.812 *********
2026-03-24 05:53:22.415558 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:53:22.415569 | orchestrator |
2026-03-24 05:53:22.415580 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 05:53:22.415597 | orchestrator | Tuesday 24 March 2026 05:53:22 +0000 (0:00:01.494) 1:04:03.307 *********
2026-03-24 05:54:04.278732 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.278848 | orchestrator |
2026-03-24 05:54:04.278866 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-24 05:54:04.278880 | orchestrator | Tuesday 24 March 2026 05:53:23 +0000 (0:00:01.107) 1:04:04.414 *********
2026-03-24 05:54:04.278891 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.278903 | orchestrator |
2026-03-24 05:54:04.278914 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-24 05:54:04.278926 | orchestrator | Tuesday 24 March 2026 05:53:25 +0000 (0:00:01.632) 1:04:06.047 *********
2026-03-24 05:54:04.278937 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.278956 | orchestrator |
2026-03-24 05:54:04.278976 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-24 05:54:04.278995 | orchestrator | Tuesday 24 March 2026 05:53:26 +0000 (0:00:01.098) 1:04:07.145 *********
2026-03-24 05:54:04.279014 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-24 05:54:04.279034 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-24 05:54:04.279054 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-24 05:54:04.279074 | orchestrator |
2026-03-24 05:54:04.279093 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-24 05:54:04.279114 | orchestrator | Tuesday 24 March 2026 05:53:27 +0000 (0:00:01.637) 1:04:08.782 *********
2026-03-24 05:54:04.279133 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-24 05:54:04.279153 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-24 05:54:04.279174 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-24 05:54:04.279194 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.279214 | orchestrator |
2026-03-24 05:54:04.279236 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-24 05:54:04.279258 | orchestrator | Tuesday 24 March 2026 05:53:29 +0000 (0:00:01.149) 1:04:09.932 *********
2026-03-24 05:54:04.279280 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-03-24 05:54:04.279301 | orchestrator |
2026-03-24 05:54:04.279322 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-24 05:54:04.279337 | orchestrator | Tuesday 24 March 2026 05:53:30 +0000 (0:00:01.109) 1:04:11.041 *********
2026-03-24 05:54:04.279350 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.279363 | orchestrator |
2026-03-24 05:54:04.279376 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-24 05:54:04.279388 | orchestrator | Tuesday 24 March 2026 05:53:31 +0000 (0:00:01.144) 1:04:12.186 *********
2026-03-24 05:54:04.279401 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.279414 | orchestrator |
2026-03-24 05:54:04.279426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-24 05:54:04.279439 | orchestrator | Tuesday 24 March 2026 05:53:32 +0000 (0:00:01.122) 1:04:13.308 *********
2026-03-24 05:54:04.279479 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.279492 | orchestrator |
2026-03-24 05:54:04.279507 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-24 05:54:04.279527 | orchestrator | Tuesday 24 March 2026 05:53:33 +0000 (0:00:01.124) 1:04:14.433 *********
2026-03-24 05:54:04.279545 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:04.279565 | orchestrator |
2026-03-24 05:54:04.279601 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-24 05:54:04.279620 | orchestrator | Tuesday 24 March 2026 05:53:34 +0000 (0:00:01.223) 1:04:15.656 *********
2026-03-24 05:54:04.279664 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-24 05:54:04.279686 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:54:04.279703 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-24 05:54:04.279721 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.279739 | orchestrator |
2026-03-24 05:54:04.279757 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-24 05:54:04.279777 | orchestrator | Tuesday 24 March 2026 05:53:36 +0000 (0:00:01.375) 1:04:17.032 *********
2026-03-24 05:54:04.279796 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-24 05:54:04.279816 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:54:04.279835 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-24 05:54:04.279851 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.279862 | orchestrator |
2026-03-24 05:54:04.279873 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-24 05:54:04.279884 | orchestrator | Tuesday 24 March 2026 05:53:37 +0000 (0:00:01.700) 1:04:18.732 *********
2026-03-24 05:54:04.279895 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-24 05:54:04.279905 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:54:04.279916 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-24 05:54:04.279927 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.279938 | orchestrator |
2026-03-24 05:54:04.279948 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-24 05:54:04.279959 | orchestrator | Tuesday 24 March 2026 05:53:39 +0000 (0:00:01.715) 1:04:20.448 *********
2026-03-24 05:54:04.279970 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:04.279981 | orchestrator |
2026-03-24 05:54:04.279992 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-24 05:54:04.280003 | orchestrator | Tuesday 24 March 2026 05:53:40 +0000 (0:00:01.215) 1:04:21.664 *********
2026-03-24 05:54:04.280013 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-24 05:54:04.280024 | orchestrator |
2026-03-24 05:54:04.280035 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-24 05:54:04.280046 | orchestrator | Tuesday 24 March 2026 05:53:42 +0000 (0:00:01.307) 1:04:22.972 *********
2026-03-24 05:54:04.280079 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:54:04.280091 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:54:04.280102 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:54:04.280113 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-24 05:54:04.280123 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:54:04.280134 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 05:54:04.280145 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:54:04.280156 | orchestrator |
2026-03-24 05:54:04.280167 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-24 05:54:04.280178 | orchestrator | Tuesday 24 March 2026 05:53:43 +0000 (0:00:01.811) 1:04:24.783 *********
2026-03-24 05:54:04.280201 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-24 05:54:04.280211 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-24 05:54:04.280222 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-24 05:54:04.280233 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-24 05:54:04.280244 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-24 05:54:04.280255 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-24 05:54:04.280265 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-24 05:54:04.280276 | orchestrator |
2026-03-24 05:54:04.280287 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-03-24 05:54:04.280297 | orchestrator | Tuesday 24 March 2026 05:53:46 +0000 (0:00:02.160) 1:04:26.944 *********
2026-03-24 05:54:04.280308 | orchestrator | changed: [testbed-node-4]
2026-03-24 05:54:04.280319 | orchestrator |
2026-03-24 05:54:04.280330 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-03-24 05:54:04.280341 | orchestrator | Tuesday 24 March 2026 05:53:48 +0000 (0:00:01.994) 1:04:28.938 *********
2026-03-24 05:54:04.280352 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:54:04.280363 | orchestrator |
2026-03-24 05:54:04.280374 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-03-24 05:54:04.280384 | orchestrator | Tuesday 24 March 2026 05:53:50 +0000 (0:00:02.765) 1:04:31.704 *********
2026-03-24 05:54:04.280395 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:54:04.280406 | orchestrator |
2026-03-24 05:54:04.280417 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 05:54:04.280428 | orchestrator | Tuesday 24 March 2026 05:53:52 +0000 (0:00:02.017) 1:04:33.721 *********
2026-03-24 05:54:04.280446 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-03-24 05:54:04.280458 | orchestrator |
2026-03-24 05:54:04.280469 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 05:54:04.280479 | orchestrator | Tuesday 24 March 2026 05:53:53 +0000 (0:00:01.089) 1:04:34.811 *********
2026-03-24 05:54:04.280490 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-03-24 05:54:04.280501 | orchestrator |
2026-03-24 05:54:04.280512 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 05:54:04.280522 | orchestrator | Tuesday 24 March 2026 05:53:55 +0000 (0:00:01.131) 1:04:35.943 *********
2026-03-24 05:54:04.280533 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.280544 | orchestrator |
2026-03-24 05:54:04.280555 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 05:54:04.280565 | orchestrator | Tuesday 24 March 2026 05:53:56 +0000 (0:00:01.159) 1:04:37.102 *********
2026-03-24 05:54:04.280576 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:04.280587 | orchestrator |
2026-03-24 05:54:04.280597 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 05:54:04.280608 | orchestrator | Tuesday 24 March 2026 05:53:57 +0000 (0:00:01.630) 1:04:38.733 *********
2026-03-24 05:54:04.280619 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:04.280629 | orchestrator |
2026-03-24 05:54:04.280669 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 05:54:04.280691 | orchestrator | Tuesday 24 March 2026 05:53:59 +0000 (0:00:01.557) 1:04:40.291 *********
2026-03-24 05:54:04.280709 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:04.280729 | orchestrator |
2026-03-24 05:54:04.280761 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 05:54:04.280774 | orchestrator | Tuesday 24 March 2026 05:54:00 +0000 (0:00:01.522) 1:04:41.814 *********
2026-03-24 05:54:04.280785 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.280796 | orchestrator |
2026-03-24 05:54:04.280807 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 05:54:04.280817 | orchestrator | Tuesday 24 March 2026 05:54:02 +0000 (0:00:01.125) 1:04:42.940 *********
2026-03-24 05:54:04.280828 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.280838 | orchestrator |
2026-03-24 05:54:04.280849 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 05:54:04.280860 | orchestrator | Tuesday 24 March 2026 05:54:03 +0000 (0:00:01.126) 1:04:44.066 *********
2026-03-24 05:54:04.280871 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:04.280881 | orchestrator |
2026-03-24 05:54:04.280892 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 05:54:04.280911 | orchestrator | Tuesday 24 March 2026 05:54:04 +0000 (0:00:01.099) 1:04:45.166 *********
2026-03-24 05:54:43.703027 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.703175 | orchestrator |
2026-03-24 05:54:43.703201 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 05:54:43.703223 | orchestrator | Tuesday 24 March 2026 05:54:05 +0000 (0:00:01.543) 1:04:46.709 *********
2026-03-24 05:54:43.703242 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.703261 | orchestrator |
2026-03-24 05:54:43.703279 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 05:54:43.703297 | orchestrator | Tuesday 24 March 2026 05:54:07 +0000 (0:00:01.542) 1:04:48.251 *********
2026-03-24 05:54:43.703316 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.703335 | orchestrator |
2026-03-24 05:54:43.703354 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 05:54:43.703374 | orchestrator | Tuesday 24 March 2026 05:54:08 +0000 (0:00:00.763) 1:04:49.015 *********
2026-03-24 05:54:43.703392 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.703410 | orchestrator |
2026-03-24 05:54:43.703428 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 05:54:43.703446 | orchestrator | Tuesday 24 March 2026 05:54:08 +0000 (0:00:00.747) 1:04:49.763 *********
2026-03-24 05:54:43.703464 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.703483 | orchestrator |
2026-03-24 05:54:43.703501 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 05:54:43.703519 | orchestrator | Tuesday 24 March 2026 05:54:09 +0000 (0:00:00.776) 1:04:50.540 *********
2026-03-24 05:54:43.703538 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.703556 | orchestrator |
2026-03-24 05:54:43.703576 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 05:54:43.703594 | orchestrator | Tuesday 24 March 2026 05:54:10 +0000 (0:00:00.800) 1:04:51.340 *********
2026-03-24 05:54:43.703611 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.703627 | orchestrator |
2026-03-24 05:54:43.703646 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 05:54:43.703711 | orchestrator | Tuesday 24 March 2026 05:54:11 +0000 (0:00:00.777) 1:04:52.118 *********
2026-03-24 05:54:43.703733 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.703752 | orchestrator |
2026-03-24 05:54:43.703771 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 05:54:43.703787 | orchestrator | Tuesday 24 March 2026 05:54:12 +0000 (0:00:00.817) 1:04:52.935 *********
2026-03-24 05:54:43.703800 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.703812 | orchestrator |
2026-03-24 05:54:43.703825 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 05:54:43.703838 | orchestrator | Tuesday 24 March 2026 05:54:12 +0000 (0:00:00.756) 1:04:53.692 *********
2026-03-24 05:54:43.703851 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.703863 | orchestrator |
2026-03-24 05:54:43.703875 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 05:54:43.703918 | orchestrator | Tuesday 24 March 2026 05:54:13 +0000 (0:00:00.769) 1:04:54.461 *********
2026-03-24 05:54:43.703930 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.703941 | orchestrator |
2026-03-24 05:54:43.703952 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 05:54:43.703962 | orchestrator | Tuesday 24 March 2026 05:54:14 +0000 (0:00:00.771) 1:04:55.233 *********
2026-03-24 05:54:43.703973 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.703984 | orchestrator |
2026-03-24 05:54:43.704010 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-24 05:54:43.704021 | orchestrator | Tuesday 24 March 2026 05:54:15 +0000 (0:00:00.818) 1:04:56.051 *********
2026-03-24 05:54:43.704033 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704044 | orchestrator |
2026-03-24 05:54:43.704055 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-24 05:54:43.704065 | orchestrator | Tuesday 24 March 2026 05:54:15 +0000 (0:00:00.783) 1:04:56.835 *********
2026-03-24 05:54:43.704076 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704087 | orchestrator |
2026-03-24 05:54:43.704098 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-24 05:54:43.704108 | orchestrator | Tuesday 24 March 2026 05:54:16 +0000 (0:00:00.749) 1:04:57.584 *********
2026-03-24 05:54:43.704119 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704130 | orchestrator |
2026-03-24 05:54:43.704140 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-24 05:54:43.704151 | orchestrator | Tuesday 24 March 2026 05:54:17 +0000 (0:00:00.751) 1:04:58.336 *********
2026-03-24 05:54:43.704162 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704173 | orchestrator |
2026-03-24 05:54:43.704183 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-24 05:54:43.704194 | orchestrator | Tuesday 24 March 2026 05:54:18 +0000 (0:00:00.794) 1:04:59.131 *********
2026-03-24 05:54:43.704205 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704215 | orchestrator |
2026-03-24 05:54:43.704226 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-24 05:54:43.704237 | orchestrator | Tuesday 24 March 2026 05:54:19 +0000 (0:00:00.771) 1:04:59.914 *********
2026-03-24 05:54:43.704248 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704258 | orchestrator |
2026-03-24 05:54:43.704269 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-24 05:54:43.704280 | orchestrator | Tuesday 24 March 2026 05:54:19 +0000 (0:00:00.771) 1:05:00.686 *********
2026-03-24 05:54:43.704291 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704301 | orchestrator |
2026-03-24 05:54:43.704312 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-24 05:54:43.704324 | orchestrator | Tuesday 24 March 2026 05:54:20 +0000 (0:00:00.783) 1:05:01.470 *********
2026-03-24 05:54:43.704335 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704346 | orchestrator |
2026-03-24 05:54:43.704356 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-24 05:54:43.704367 | orchestrator | Tuesday 24 March 2026 05:54:21 +0000 (0:00:00.755) 1:05:02.226 *********
2026-03-24 05:54:43.704378 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704389 | orchestrator |
2026-03-24 05:54:43.704421 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-24 05:54:43.704433 | orchestrator | Tuesday 24 March 2026 05:54:22 +0000 (0:00:00.793) 1:05:03.019 *********
2026-03-24 05:54:43.704444 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704455 | orchestrator |
2026-03-24 05:54:43.704465 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-24 05:54:43.704476 | orchestrator | Tuesday 24 March 2026 05:54:22 +0000 (0:00:00.754) 1:05:03.774 *********
2026-03-24 05:54:43.704487 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704498 | orchestrator |
2026-03-24 05:54:43.704517 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-24 05:54:43.704528 | orchestrator | Tuesday 24 March 2026 05:54:23 +0000 (0:00:00.762) 1:05:04.537 *********
2026-03-24 05:54:43.704539 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704550 | orchestrator |
2026-03-24 05:54:43.704561 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-24 05:54:43.704571 | orchestrator | Tuesday 24 March 2026 05:54:24 +0000 (0:00:00.759) 1:05:05.297 *********
2026-03-24 05:54:43.704582 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.704593 | orchestrator |
2026-03-24 05:54:43.704604 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-24 05:54:43.704614 | orchestrator | Tuesday 24 March 2026 05:54:26 +0000 (0:00:01.646) 1:05:06.944 *********
2026-03-24 05:54:43.704625 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.704636 | orchestrator |
2026-03-24 05:54:43.704647 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-24 05:54:43.704703 | orchestrator | Tuesday 24 March 2026 05:54:28 +0000 (0:00:01.977) 1:05:08.921 *********
2026-03-24 05:54:43.704714 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-03-24 05:54:43.704726 | orchestrator |
2026-03-24 05:54:43.704737 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-24 05:54:43.704748 | orchestrator | Tuesday 24 March 2026 05:54:29 +0000 (0:00:01.115) 1:05:10.037 *********
2026-03-24 05:54:43.704758 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704769 | orchestrator |
2026-03-24 05:54:43.704780 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-24 05:54:43.704791 | orchestrator | Tuesday 24 March 2026 05:54:30 +0000 (0:00:01.096) 1:05:11.133 *********
2026-03-24 05:54:43.704801 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704812 | orchestrator |
2026-03-24 05:54:43.704822 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-24 05:54:43.704833 | orchestrator | Tuesday 24 March 2026 05:54:31 +0000 (0:00:01.146) 1:05:12.280 *********
2026-03-24 05:54:43.704844 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-24 05:54:43.704855 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-24 05:54:43.704866 | orchestrator |
2026-03-24 05:54:43.704876 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-24 05:54:43.704887 | orchestrator | Tuesday 24 March 2026 05:54:33 +0000 (0:00:01.861) 1:05:14.142 *********
2026-03-24 05:54:43.704898 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.704908 | orchestrator |
2026-03-24 05:54:43.704919 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-24 05:54:43.704936 | orchestrator | Tuesday 24 March 2026 05:54:34 +0000 (0:00:01.444) 1:05:15.587 *********
2026-03-24 05:54:43.704947 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.704957 | orchestrator |
2026-03-24 05:54:43.704968 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-24 05:54:43.704979 | orchestrator | Tuesday 24 March 2026 05:54:35 +0000 (0:00:01.201) 1:05:16.788 *********
2026-03-24 05:54:43.704990 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.705000 | orchestrator |
2026-03-24 05:54:43.705011 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-24 05:54:43.705022 | orchestrator | Tuesday 24 March 2026 05:54:36 +0000 (0:00:00.792) 1:05:17.581 *********
2026-03-24 05:54:43.705032 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.705043 | orchestrator |
2026-03-24 05:54:43.705054 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-24 05:54:43.705065 | orchestrator | Tuesday 24 March 2026 05:54:37 +0000 (0:00:00.760) 1:05:18.342 *********
2026-03-24 05:54:43.705075 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-03-24 05:54:43.705086 | orchestrator |
2026-03-24 05:54:43.705106 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-24 05:54:43.705117 | orchestrator | Tuesday 24 March 2026 05:54:38 +0000 (0:00:01.118) 1:05:19.461 *********
2026-03-24 05:54:43.705127 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:54:43.705138 | orchestrator |
2026-03-24 05:54:43.705149 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-24 05:54:43.705159 | orchestrator | Tuesday 24 March 2026 05:54:40 +0000 (0:00:01.733) 1:05:21.195 *********
2026-03-24 05:54:43.705170 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-24 05:54:43.705181 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-24 05:54:43.705192 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-24 05:54:43.705202 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.705213 | orchestrator |
2026-03-24 05:54:43.705224 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-24 05:54:43.705235 | orchestrator | Tuesday 24 March 2026 05:54:41 +0000 (0:00:01.114) 1:05:22.310 *********
2026-03-24 05:54:43.705245 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.705256 | orchestrator |
2026-03-24 05:54:43.705267 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-24 05:54:43.705277 | orchestrator | Tuesday 24 March 2026 05:54:42 +0000 (0:00:01.123) 1:05:23.433 *********
2026-03-24 05:54:43.705288 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:54:43.705299 | orchestrator |
2026-03-24 05:54:43.705317 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-24 05:55:26.627014 | orchestrator | Tuesday 24 March 2026 05:54:43 +0000 (0:00:01.159) 1:05:24.593 *********
2026-03-24 05:55:26.627112 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627145 | orchestrator |
2026-03-24 05:55:26.627167 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-24 05:55:26.627178 | orchestrator | Tuesday 24 March 2026 05:54:44 +0000 (0:00:01.183) 1:05:25.776 *********
2026-03-24 05:55:26.627189 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627200 | orchestrator |
2026-03-24 05:55:26.627211 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-24 05:55:26.627222 | orchestrator | Tuesday 24 March 2026 05:54:46 +0000 (0:00:01.160) 1:05:26.937 *********
2026-03-24 05:55:26.627232 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627244 | orchestrator |
2026-03-24 05:55:26.627251 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-24 05:55:26.627258 | orchestrator | Tuesday 24 March 2026 05:54:46 +0000 (0:00:00.788) 1:05:27.725 *********
2026-03-24 05:55:26.627264 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:55:26.627272 | orchestrator |
2026-03-24 05:55:26.627278 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-24 05:55:26.627285 | orchestrator | Tuesday 24 March 2026 05:54:49 +0000 (0:00:02.235) 1:05:29.961 *********
2026-03-24 05:55:26.627292 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:55:26.627298 | orchestrator |
2026-03-24 05:55:26.627305 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-24 05:55:26.627311 | orchestrator | Tuesday 24 March 2026 05:54:49 +0000 (0:00:00.826) 1:05:30.788 *********
2026-03-24 05:55:26.627318 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-03-24 05:55:26.627324 | orchestrator |
2026-03-24 05:55:26.627330 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-24 05:55:26.627337 | orchestrator | Tuesday 24 March 2026 05:54:51 +0000 (0:00:01.143) 1:05:31.931 *********
2026-03-24 05:55:26.627343 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627349 | orchestrator |
2026-03-24 05:55:26.627355 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-24 05:55:26.627362 | orchestrator | Tuesday 24 March 2026 05:54:52 +0000 (0:00:01.126) 1:05:33.058 *********
2026-03-24 05:55:26.627389 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627396 | orchestrator |
2026-03-24 05:55:26.627402 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-24 05:55:26.627408 | orchestrator | Tuesday 24 March 2026 05:54:53 +0000 (0:00:01.132) 1:05:34.191 *********
2026-03-24 05:55:26.627414 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627420 | orchestrator |
2026-03-24 05:55:26.627426 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-24 05:55:26.627433 | orchestrator | Tuesday 24 March 2026 05:54:54 +0000 (0:00:01.127) 1:05:35.319 *********
2026-03-24 05:55:26.627439 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627445 | orchestrator |
2026-03-24 05:55:26.627451 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-24 05:55:26.627457 | orchestrator | Tuesday 24 March 2026 05:54:55 +0000 (0:00:01.181) 1:05:36.501 *********
2026-03-24 05:55:26.627463 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627469 | orchestrator |
2026-03-24 05:55:26.627476 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-24 05:55:26.627482 | orchestrator | Tuesday 24 March 2026 05:54:56 +0000 (0:00:01.121) 1:05:37.622 *********
2026-03-24 05:55:26.627488 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627494 | orchestrator |
2026-03-24 05:55:26.627500 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-24 05:55:26.627507 | orchestrator | Tuesday 24 March 2026 05:54:57 +0000 (0:00:01.133) 1:05:38.756 *********
2026-03-24 05:55:26.627513 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627519 | orchestrator |
2026-03-24 05:55:26.627525 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-24 05:55:26.627531 | orchestrator | Tuesday 24 March 2026 05:54:59 +0000 (0:00:01.161) 1:05:39.917 *********
2026-03-24 05:55:26.627537 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627543 | orchestrator |
2026-03-24 05:55:26.627550 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-24 05:55:26.627556 | orchestrator | Tuesday 24 March 2026 05:55:00 +0000 (0:00:01.105) 1:05:41.023 *********
2026-03-24 05:55:26.627562 | orchestrator | ok: [testbed-node-4]
2026-03-24 05:55:26.627568 | orchestrator |
2026-03-24 05:55:26.627574 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-24 05:55:26.627582 | orchestrator | Tuesday 24 March 2026 05:55:00 +0000 (0:00:00.822) 1:05:41.845 *********
2026-03-24 05:55:26.627593 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-03-24 05:55:26.627604 | orchestrator |
2026-03-24 05:55:26.627615 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-24 05:55:26.627626 | orchestrator | Tuesday 24 March 2026 05:55:02 +0000 (0:00:01.228) 1:05:43.074 *********
2026-03-24 05:55:26.627638 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-03-24 05:55:26.627648 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-24 05:55:26.627697 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-24 05:55:26.627705 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-24 05:55:26.627711 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-24 05:55:26.627717 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-24 05:55:26.627723 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-24 05:55:26.627729 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-24 05:55:26.627735 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-24 05:55:26.627741 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-24 05:55:26.627748 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-24 05:55:26.627768 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-24 05:55:26.627774 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-24 05:55:26.627787 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-24 05:55:26.627793 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-03-24 05:55:26.627800 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-03-24 05:55:26.627806 | orchestrator |
2026-03-24 05:55:26.627812 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-24 05:55:26.627818 | orchestrator | Tuesday 24 March 2026 05:55:08 +0000 (0:00:06.580) 1:05:49.654 *********
2026-03-24 05:55:26.627824 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-03-24 05:55:26.627830 | orchestrator |
2026-03-24 05:55:26.627836 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-24 05:55:26.627842 | orchestrator | Tuesday 24 March 2026 05:55:09 +0000 (0:00:01.113) 1:05:50.767 *********
2026-03-24 05:55:26.627848 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:55:26.627856 | orchestrator |
2026-03-24 05:55:26.627862 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-24 05:55:26.627868 | orchestrator | Tuesday 24 March 2026 05:55:11 +0000 (0:00:01.534) 1:05:52.302 *********
2026-03-24 05:55:26.627875 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-24 05:55:26.627881 | orchestrator |
2026-03-24 05:55:26.627924 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-24 05:55:26.627930 | orchestrator | Tuesday 24 March 2026 05:55:13 +0000 (0:00:01.734) 1:05:54.037 *********
2026-03-24 05:55:26.627937 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627943 | orchestrator |
2026-03-24 05:55:26.627949 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-24 05:55:26.627955 | orchestrator | Tuesday 24 March 2026 05:55:13 +0000 (0:00:00.762) 1:05:54.799 *********
2026-03-24 05:55:26.627961 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627967 | orchestrator |
2026-03-24 05:55:26.627973 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-24 05:55:26.627980 | orchestrator | Tuesday 24 March 2026 05:55:14 +0000 (0:00:00.749) 1:05:55.549 *********
2026-03-24 05:55:26.627986 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.627992 | orchestrator |
2026-03-24 05:55:26.627998 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-24 05:55:26.628004 | orchestrator | Tuesday 24 March 2026 05:55:15 +0000 (0:00:00.784) 1:05:56.334 *********
2026-03-24 05:55:26.628010 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.628016 | orchestrator |
2026-03-24 05:55:26.628022 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-24 05:55:26.628028 | orchestrator | Tuesday 24 March 2026 05:55:16 +0000 (0:00:00.780) 1:05:57.115 *********
2026-03-24 05:55:26.628037 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.628044 | orchestrator |
2026-03-24 05:55:26.628050 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-24 05:55:26.628056 | orchestrator | Tuesday 24 March 2026 05:55:16 +0000 (0:00:00.761) 1:05:57.871 *********
2026-03-24 05:55:26.628062 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.628068 | orchestrator |
2026-03-24 05:55:26.628074 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-24 05:55:26.628081 | orchestrator | Tuesday 24 March 2026 05:55:17 +0000 (0:00:00.761) 1:05:58.632 *********
2026-03-24 05:55:26.628087 | orchestrator | skipping: [testbed-node-4]
2026-03-24 05:55:26.628093 | orchestrator |
2026-03-24 05:55:26.628099 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)]
*** 2026-03-24 05:55:26.628105 | orchestrator | Tuesday 24 March 2026 05:55:18 +0000 (0:00:00.750) 1:05:59.383 ********* 2026-03-24 05:55:26.628111 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:55:26.628122 | orchestrator | 2026-03-24 05:55:26.628128 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 05:55:26.628134 | orchestrator | Tuesday 24 March 2026 05:55:19 +0000 (0:00:00.780) 1:06:00.164 ********* 2026-03-24 05:55:26.628140 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:55:26.628146 | orchestrator | 2026-03-24 05:55:26.628152 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 05:55:26.628158 | orchestrator | Tuesday 24 March 2026 05:55:20 +0000 (0:00:00.767) 1:06:00.932 ********* 2026-03-24 05:55:26.628164 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:55:26.628170 | orchestrator | 2026-03-24 05:55:26.628176 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 05:55:26.628183 | orchestrator | Tuesday 24 March 2026 05:55:20 +0000 (0:00:00.783) 1:06:01.716 ********* 2026-03-24 05:55:26.628189 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:55:26.628195 | orchestrator | 2026-03-24 05:55:26.628201 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 05:55:26.628207 | orchestrator | Tuesday 24 March 2026 05:55:21 +0000 (0:00:00.788) 1:06:02.504 ********* 2026-03-24 05:55:26.628213 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-24 05:55:26.628219 | orchestrator | 2026-03-24 05:55:26.628225 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 05:55:26.628231 | orchestrator | Tuesday 24 March 2026 05:55:25 +0000 (0:00:04.199) 1:06:06.704 ********* 2026-03-24 05:55:26.628237 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-24 05:55:26.628243 | orchestrator | 2026-03-24 05:55:26.628254 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-24 05:56:06.189157 | orchestrator | Tuesday 24 March 2026 05:55:26 +0000 (0:00:00.810) 1:06:07.515 ********* 2026-03-24 05:56:06.189286 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-24 05:56:06.189306 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-24 05:56:06.189319 | orchestrator | 2026-03-24 05:56:06.189333 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:56:06.189346 | orchestrator | Tuesday 24 March 2026 05:55:31 +0000 (0:00:04.450) 1:06:11.965 ********* 2026-03-24 05:56:06.189358 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:56:06.189372 | orchestrator | 2026-03-24 05:56:06.189384 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:56:06.189397 | orchestrator | Tuesday 24 March 2026 05:55:31 +0000 (0:00:00.762) 1:06:12.727 ********* 2026-03-24 05:56:06.189408 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:56:06.189420 | orchestrator | 2026-03-24 05:56:06.189432 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:56:06.189445 | orchestrator | Tuesday 24 March 2026 05:55:32 +0000 (0:00:00.743) 1:06:13.471 ********* 2026-03-24 05:56:06.189457 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:56:06.189468 | orchestrator | 2026-03-24 05:56:06.189480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:56:06.189492 | orchestrator | Tuesday 24 March 2026 05:55:33 +0000 (0:00:00.779) 1:06:14.250 ********* 2026-03-24 05:56:06.189505 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:56:06.189516 | orchestrator | 2026-03-24 05:56:06.189559 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:56:06.189573 | orchestrator | Tuesday 24 March 2026 05:55:34 +0000 (0:00:00.766) 1:06:15.016 ********* 2026-03-24 05:56:06.189584 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:56:06.189595 | orchestrator | 2026-03-24 05:56:06.189606 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:56:06.189617 | orchestrator | Tuesday 24 March 2026 05:55:34 +0000 (0:00:00.747) 1:06:15.765 ********* 2026-03-24 05:56:06.189629 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:56:06.189643 | orchestrator | 2026-03-24 05:56:06.189655 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:56:06.189751 | orchestrator | Tuesday 24 March 2026 05:55:35 +0000 (0:00:00.884) 1:06:16.650 ********* 2026-03-24 05:56:06.189771 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:56:06.189784 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:56:06.189797 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:56:06.189808 | orchestrator | skipping: 
[testbed-node-4] 2026-03-24 05:56:06.189817 | orchestrator | 2026-03-24 05:56:06.189825 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:56:06.189834 | orchestrator | Tuesday 24 March 2026 05:55:36 +0000 (0:00:01.164) 1:06:17.814 ********* 2026-03-24 05:56:06.189842 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:56:06.189851 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:56:06.189859 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:56:06.189868 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:56:06.189876 | orchestrator | 2026-03-24 05:56:06.189884 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:56:06.189892 | orchestrator | Tuesday 24 March 2026 05:55:37 +0000 (0:00:01.032) 1:06:18.846 ********* 2026-03-24 05:56:06.189901 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-24 05:56:06.189909 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-24 05:56:06.189917 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-24 05:56:06.189925 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:56:06.189933 | orchestrator | 2026-03-24 05:56:06.189941 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:56:06.189950 | orchestrator | Tuesday 24 March 2026 05:55:39 +0000 (0:00:01.068) 1:06:19.915 ********* 2026-03-24 05:56:06.189958 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:56:06.189966 | orchestrator | 2026-03-24 05:56:06.189975 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:56:06.189983 | orchestrator | Tuesday 24 March 2026 05:55:39 +0000 (0:00:00.763) 1:06:20.678 ********* 2026-03-24 05:56:06.189991 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-03-24 05:56:06.189999 | orchestrator | 2026-03-24 05:56:06.190075 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 05:56:06.190085 | orchestrator | Tuesday 24 March 2026 05:55:40 +0000 (0:00:00.955) 1:06:21.634 ********* 2026-03-24 05:56:06.190094 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:56:06.190102 | orchestrator | 2026-03-24 05:56:06.190110 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-24 05:56:06.190117 | orchestrator | Tuesday 24 March 2026 05:55:42 +0000 (0:00:01.345) 1:06:22.980 ********* 2026-03-24 05:56:06.190124 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-03-24 05:56:06.190132 | orchestrator | 2026-03-24 05:56:06.190158 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-24 05:56:06.190166 | orchestrator | Tuesday 24 March 2026 05:55:43 +0000 (0:00:01.062) 1:06:24.043 ********* 2026-03-24 05:56:06.190173 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:56:06.190180 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-24 05:56:06.190199 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 05:56:06.190206 | orchestrator | 2026-03-24 05:56:06.190213 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:56:06.190220 | orchestrator | Tuesday 24 March 2026 05:55:46 +0000 (0:00:02.931) 1:06:26.974 ********* 2026-03-24 05:56:06.190228 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-24 05:56:06.190235 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-24 05:56:06.190242 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:56:06.190249 | orchestrator | 2026-03-24 05:56:06.190256 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-24 05:56:06.190264 | orchestrator | Tuesday 24 March 2026 05:55:48 +0000 (0:00:01.970) 1:06:28.944 ********* 2026-03-24 05:56:06.190271 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:56:06.190278 | orchestrator | 2026-03-24 05:56:06.190286 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-24 05:56:06.190293 | orchestrator | Tuesday 24 March 2026 05:55:48 +0000 (0:00:00.740) 1:06:29.685 ********* 2026-03-24 05:56:06.190300 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-03-24 05:56:06.190308 | orchestrator | 2026-03-24 05:56:06.190315 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-24 05:56:06.190323 | orchestrator | Tuesday 24 March 2026 05:55:49 +0000 (0:00:01.101) 1:06:30.787 ********* 2026-03-24 05:56:06.190330 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-24 05:56:06.190339 | orchestrator | 2026-03-24 05:56:06.190346 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-24 05:56:06.190354 | orchestrator | Tuesday 24 March 2026 05:55:51 +0000 (0:00:01.546) 1:06:32.333 ********* 2026-03-24 05:56:06.190361 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:56:06.190368 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-24 05:56:06.190375 | orchestrator | 2026-03-24 05:56:06.190382 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-24 05:56:06.190390 | orchestrator | Tuesday 24 March 2026 05:55:56 +0000 (0:00:05.181) 1:06:37.515 ********* 
2026-03-24 05:56:06.190397 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 05:56:06.190404 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 05:56:06.190411 | orchestrator | 2026-03-24 05:56:06.190423 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-24 05:56:06.190431 | orchestrator | Tuesday 24 March 2026 05:55:59 +0000 (0:00:03.238) 1:06:40.753 ********* 2026-03-24 05:56:06.190438 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-24 05:56:06.190445 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:56:06.190452 | orchestrator | 2026-03-24 05:56:06.190459 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-24 05:56:06.190466 | orchestrator | Tuesday 24 March 2026 05:56:01 +0000 (0:00:01.693) 1:06:42.446 ********* 2026-03-24 05:56:06.190474 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-03-24 05:56:06.190481 | orchestrator | 2026-03-24 05:56:06.190488 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-24 05:56:06.190495 | orchestrator | Tuesday 24 March 2026 05:56:02 +0000 (0:00:01.147) 1:06:43.594 ********* 2026-03-24 05:56:06.190502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:56:06.190510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:56:06.190518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:56:06.190536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-24 05:56:06.190549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:56:06.190571 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:56:06.190583 | orchestrator | 2026-03-24 05:56:06.190595 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-24 05:56:06.190607 | orchestrator | Tuesday 24 March 2026 05:56:04 +0000 (0:00:01.589) 1:06:45.184 ********* 2026-03-24 05:56:06.190619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:56:06.190630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:56:06.190643 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:56:06.190662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:57:13.253431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 05:57:13.253578 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:57:13.253611 | orchestrator | 2026-03-24 05:57:13.253635 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-24 05:57:13.253656 | orchestrator | Tuesday 24 March 2026 05:56:06 +0000 (0:00:01.887) 1:06:47.071 ********* 2026-03-24 05:57:13.253730 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:57:13.253766 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:57:13.253788 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:57:13.253806 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:57:13.253823 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 05:57:13.253839 | orchestrator | 2026-03-24 05:57:13.253858 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-24 05:57:13.253876 | orchestrator | Tuesday 24 March 2026 05:56:38 +0000 (0:00:32.737) 1:07:19.809 ********* 2026-03-24 05:57:13.253894 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:57:13.253912 | orchestrator | 2026-03-24 05:57:13.253930 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-24 05:57:13.253951 | orchestrator | Tuesday 24 March 2026 05:56:39 +0000 (0:00:00.766) 1:07:20.575 ********* 2026-03-24 05:57:13.253972 | orchestrator | skipping: [testbed-node-4] 2026-03-24 05:57:13.253991 | orchestrator | 2026-03-24 05:57:13.254010 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-24 05:57:13.254120 | orchestrator | Tuesday 24 March 2026 05:56:40 +0000 (0:00:00.767) 1:07:21.343 ********* 2026-03-24 05:57:13.254141 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-03-24 05:57:13.254160 | orchestrator | 2026-03-24 05:57:13.254180 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-24 05:57:13.254197 | orchestrator | Tuesday 24 March 2026 05:56:41 +0000 (0:00:01.177) 1:07:22.521 ********* 2026-03-24 05:57:13.254255 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-03-24 05:57:13.254279 | orchestrator | 2026-03-24 05:57:13.254317 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-24 05:57:13.254332 | orchestrator | Tuesday 24 March 2026 05:56:42 +0000 (0:00:01.084) 1:07:23.606 ********* 2026-03-24 05:57:13.254343 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:57:13.254355 | orchestrator | 2026-03-24 05:57:13.254365 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-24 05:57:13.254376 | orchestrator | Tuesday 24 March 2026 05:56:44 +0000 (0:00:02.010) 1:07:25.616 ********* 2026-03-24 05:57:13.254387 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:57:13.254398 | orchestrator | 2026-03-24 05:57:13.254409 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-24 05:57:13.254419 | orchestrator | Tuesday 24 March 2026 05:56:46 +0000 (0:00:02.000) 1:07:27.617 ********* 2026-03-24 05:57:13.254430 | orchestrator | ok: [testbed-node-4] 2026-03-24 05:57:13.254441 | orchestrator | 2026-03-24 05:57:13.254452 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-24 05:57:13.254463 | orchestrator | Tuesday 24 March 2026 05:56:48 +0000 (0:00:02.276) 1:07:29.894 ********* 2026-03-24 05:57:13.254474 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-24 05:57:13.254485 | orchestrator | 2026-03-24 05:57:13.254496 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-24 05:57:13.254507 | 
orchestrator | 2026-03-24 05:57:13.254517 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-24 05:57:13.254528 | orchestrator | Tuesday 24 March 2026 05:56:51 +0000 (0:00:02.874) 1:07:32.768 ********* 2026-03-24 05:57:13.254539 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-24 05:57:13.254549 | orchestrator | 2026-03-24 05:57:13.254598 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-24 05:57:13.254632 | orchestrator | Tuesday 24 March 2026 05:56:52 +0000 (0:00:01.085) 1:07:33.854 ********* 2026-03-24 05:57:13.254644 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:13.254654 | orchestrator | 2026-03-24 05:57:13.254665 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-24 05:57:13.254676 | orchestrator | Tuesday 24 March 2026 05:56:54 +0000 (0:00:01.473) 1:07:35.328 ********* 2026-03-24 05:57:13.254686 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:13.254697 | orchestrator | 2026-03-24 05:57:13.254733 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-24 05:57:13.254744 | orchestrator | Tuesday 24 March 2026 05:56:55 +0000 (0:00:01.152) 1:07:36.480 ********* 2026-03-24 05:57:13.254755 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:13.254765 | orchestrator | 2026-03-24 05:57:13.254776 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-24 05:57:13.254787 | orchestrator | Tuesday 24 March 2026 05:56:57 +0000 (0:00:01.422) 1:07:37.902 ********* 2026-03-24 05:57:13.254798 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:13.254809 | orchestrator | 2026-03-24 05:57:13.254841 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-24 05:57:13.254853 | orchestrator | Tuesday 24 
March 2026 05:56:58 +0000 (0:00:01.132) 1:07:39.035 ********* 2026-03-24 05:57:13.254864 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:13.254875 | orchestrator | 2026-03-24 05:57:13.254886 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-24 05:57:13.254897 | orchestrator | Tuesday 24 March 2026 05:56:59 +0000 (0:00:01.138) 1:07:40.174 ********* 2026-03-24 05:57:13.254908 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:13.254919 | orchestrator | 2026-03-24 05:57:13.254930 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-24 05:57:13.254941 | orchestrator | Tuesday 24 March 2026 05:57:00 +0000 (0:00:01.153) 1:07:41.327 ********* 2026-03-24 05:57:13.254962 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:13.254973 | orchestrator | 2026-03-24 05:57:13.255000 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-24 05:57:13.255012 | orchestrator | Tuesday 24 March 2026 05:57:01 +0000 (0:00:01.145) 1:07:42.473 ********* 2026-03-24 05:57:13.255034 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:13.255045 | orchestrator | 2026-03-24 05:57:13.255055 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-24 05:57:13.255066 | orchestrator | Tuesday 24 March 2026 05:57:02 +0000 (0:00:01.133) 1:07:43.606 ********* 2026-03-24 05:57:13.255077 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:57:13.255088 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:57:13.255099 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:57:13.255110 | orchestrator | 2026-03-24 05:57:13.255120 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-24 05:57:13.255131 | orchestrator | Tuesday 24 March 2026 05:57:04 +0000 (0:00:01.665) 1:07:45.272 ********* 2026-03-24 05:57:13.255142 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:13.255153 | orchestrator | 2026-03-24 05:57:13.255164 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-24 05:57:13.255174 | orchestrator | Tuesday 24 March 2026 05:57:05 +0000 (0:00:01.246) 1:07:46.518 ********* 2026-03-24 05:57:13.255185 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:57:13.255196 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:57:13.255207 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:57:13.255218 | orchestrator | 2026-03-24 05:57:13.255229 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-24 05:57:13.255239 | orchestrator | Tuesday 24 March 2026 05:57:08 +0000 (0:00:03.166) 1:07:49.685 ********* 2026-03-24 05:57:13.255250 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-24 05:57:13.255267 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-24 05:57:13.255278 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-24 05:57:13.255289 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:13.255300 | orchestrator | 2026-03-24 05:57:13.255310 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-24 05:57:13.255321 | orchestrator | Tuesday 24 March 2026 05:57:10 +0000 (0:00:01.398) 1:07:51.083 ********* 2026-03-24 05:57:13.255334 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-24 05:57:13.255349 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-24 05:57:13.255360 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-24 05:57:13.255371 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:13.255382 | orchestrator | 2026-03-24 05:57:13.255393 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-24 05:57:13.255404 | orchestrator | Tuesday 24 March 2026 05:57:12 +0000 (0:00:01.913) 1:07:52.997 ********* 2026-03-24 05:57:13.255418 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:13.255447 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:31.833238 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:31.833377 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:31.833402 | orchestrator | 2026-03-24 05:57:31.833422 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-24 05:57:31.833442 | orchestrator | Tuesday 24 March 2026 05:57:13 +0000 (0:00:01.145) 1:07:54.143 ********* 2026-03-24 05:57:31.833463 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '9a2a8fe1a295', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-24 05:57:06.125059', 'end': '2026-03-24 05:57:06.166637', 'delta': '0:00:00.041578', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a2a8fe1a295'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-24 05:57:31.833503 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c50257445160', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-24 05:57:06.702566', 'end': '2026-03-24 05:57:06.748600', 'delta': '0:00:00.046034', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c50257445160'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-24 05:57:31.833523 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '3e51a16c9e51', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-24 05:57:07.576685', 'end': '2026-03-24 05:57:07.625713', 'delta': '0:00:00.049028', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3e51a16c9e51'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-24 05:57:31.833541 | orchestrator | 2026-03-24 05:57:31.833560 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-24 05:57:31.833577 | orchestrator | Tuesday 24 March 2026 05:57:14 +0000 (0:00:01.197) 1:07:55.340 ********* 2026-03-24 05:57:31.833621 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:31.833640 | orchestrator | 2026-03-24 05:57:31.833658 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-24 05:57:31.833676 | orchestrator | Tuesday 24 March 2026 05:57:15 +0000 (0:00:01.248) 1:07:56.589 ********* 2026-03-24 05:57:31.833693 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:31.833711 | orchestrator | 2026-03-24 05:57:31.833759 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-24 05:57:31.833776 | orchestrator | Tuesday 24 March 2026 05:57:16 +0000 (0:00:01.245) 1:07:57.834 ********* 2026-03-24 05:57:31.833792 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:31.833810 | orchestrator | 2026-03-24 05:57:31.833829 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-24 05:57:31.833847 | orchestrator | Tuesday 24 March 2026 05:57:18 +0000 (0:00:01.109) 1:07:58.944 ********* 2026-03-24 05:57:31.833866 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-24 05:57:31.833884 | orchestrator | 2026-03-24 05:57:31.833902 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:57:31.833921 | orchestrator | Tuesday 24 March 2026 05:57:19 +0000 (0:00:01.956) 1:08:00.900 ********* 2026-03-24 05:57:31.833939 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:31.833958 | orchestrator | 2026-03-24 05:57:31.833976 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-24 05:57:31.833996 | orchestrator | Tuesday 24 March 2026 05:57:21 +0000 (0:00:01.123) 1:08:02.024 ********* 2026-03-24 05:57:31.834102 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:31.834128 | orchestrator | 2026-03-24 05:57:31.834146 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-24 05:57:31.834164 | orchestrator | Tuesday 24 March 2026 05:57:22 +0000 (0:00:01.114) 1:08:03.138 ********* 2026-03-24 05:57:31.834181 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:31.834199 | orchestrator | 2026-03-24 05:57:31.834216 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-24 05:57:31.834234 | orchestrator | Tuesday 24 March 2026 05:57:23 +0000 (0:00:01.236) 1:08:04.374 ********* 2026-03-24 05:57:31.834252 | orchestrator | 
skipping: [testbed-node-5] 2026-03-24 05:57:31.834270 | orchestrator | 2026-03-24 05:57:31.834287 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-24 05:57:31.834305 | orchestrator | Tuesday 24 March 2026 05:57:24 +0000 (0:00:01.131) 1:08:05.506 ********* 2026-03-24 05:57:31.834322 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:31.834340 | orchestrator | 2026-03-24 05:57:31.834357 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-24 05:57:31.834374 | orchestrator | Tuesday 24 March 2026 05:57:25 +0000 (0:00:01.126) 1:08:06.633 ********* 2026-03-24 05:57:31.834392 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:31.834409 | orchestrator | 2026-03-24 05:57:31.834426 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-24 05:57:31.834444 | orchestrator | Tuesday 24 March 2026 05:57:26 +0000 (0:00:01.156) 1:08:07.789 ********* 2026-03-24 05:57:31.834461 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:31.834478 | orchestrator | 2026-03-24 05:57:31.834496 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-24 05:57:31.834513 | orchestrator | Tuesday 24 March 2026 05:57:27 +0000 (0:00:01.108) 1:08:08.897 ********* 2026-03-24 05:57:31.834531 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:31.834549 | orchestrator | 2026-03-24 05:57:31.834567 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-24 05:57:31.834585 | orchestrator | Tuesday 24 March 2026 05:57:29 +0000 (0:00:01.178) 1:08:10.075 ********* 2026-03-24 05:57:31.834602 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:31.834620 | orchestrator | 2026-03-24 05:57:31.834637 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-24 05:57:31.834656 
| orchestrator | Tuesday 24 March 2026 05:57:30 +0000 (0:00:01.187) 1:08:11.263 ********* 2026-03-24 05:57:31.834687 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:31.834705 | orchestrator | 2026-03-24 05:57:31.834741 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-24 05:57:31.834757 | orchestrator | Tuesday 24 March 2026 05:57:31 +0000 (0:00:01.193) 1:08:12.456 ********* 2026-03-24 05:57:31.834782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:57:31.834801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'uuids': ['37d3be03-52e4-42ec-a3b4-48d6e6f02ec4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4']}})  2026-03-24 05:57:31.834820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b1c01c59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-24 05:57:31.834849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f']}})  2026-03-24 05:57:32.945154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:57:32.945257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:57:32.945276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-24 05:57:32.945330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:57:32.945343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA', 'dm-uuid-CRYPT-LUKS2-f7a38ad6fb8a47e49b12a27889e2fccd-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:57:32.945355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:57:32.945382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'uuids': ['f7a38ad6-fb8a-47e4-9b12-a27889e2fccd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA']}})  2026-03-24 05:57:32.945415 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59']}})  2026-03-24 05:57:32.945429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:57:32.945450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8862b49e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-24 05:57:32.945473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:57:32.945485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-24 05:57:32.945504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4', 'dm-uuid-CRYPT-LUKS2-37d3be0352e442eca3b448d6e6f02ec4-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-24 05:57:33.184212 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:33.184303 | orchestrator | 2026-03-24 05:57:33.184316 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-24 05:57:33.184326 | orchestrator | Tuesday 24 March 2026 05:57:32 +0000 (0:00:01.381) 1:08:13.838 ********* 2026-03-24 05:57:33.184338 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184384 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59', 'dm-uuid-LVM-fqEcBb5BL4E0RT3mYurPtn5jN7LMS8Or0tCJ9maSE6s91fkR95dcCTUWFqk9Kxe4'], 'uuids': ['37d3be03-52e4-42ec-a3b4-48d6e6f02ec4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184397 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a', 'scsi-SQEMU_QEMU_HARDDISK_b1c01c59-5cc3-4efd-b762-ef9b36f8e82a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b1c01c59', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184408 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iTCEGw-oi0C-Gvnx-OdwE-T1GL-hEW7-H1pElQ', 'scsi-0QEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5', 'scsi-SQEMU_QEMU_HARDDISK_637e3c3b-1b7c-4875-ba1f-929ede49b5d5'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184437 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184447 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184465 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-24-01-35-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184489 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA', 'dm-uuid-CRYPT-LUKS2-f7a38ad6fb8a47e49b12a27889e2fccd-Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184498 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:33.184514 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7dc39596--c9fc--583d--89f8--392d010fb80f-osd--block--7dc39596--c9fc--583d--89f8--392d010fb80f', 'dm-uuid-LVM-ENjDwotGBm0Apik7UXVG1pfQNOiVYV7jQj6C78CBr57fkmNoZoJZXS4k5KmYHsUA'], 'uuids': ['f7a38ad6-fb8a-47e4-9b12-a27889e2fccd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '637e3c3b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Qj6C78-CBr5-7fkm-NoZo-JZXS-4k5K-mYHsUA']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:46.407032 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-JY1vGN-PzQb-Q8f9-ckMt-6JLy-xmjt-DCDIjV', 'scsi-0QEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e', 'scsi-SQEMU_QEMU_HARDDISK_69b3fd8b-3b41-44d2-abc9-ba13d6107c6e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '69b3fd8b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7e9350b0--7da1--52b7--a847--2b8ea41c8f59-osd--block--7e9350b0--7da1--52b7--a847--2b8ea41c8f59']}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:46.407211 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:46.407231 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8862b49e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1', 'scsi-SQEMU_QEMU_HARDDISK_8862b49e-6192-4e89-91ad-23c351a2afe9-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:46.407297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:46.407312 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:46.407330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4', 'dm-uuid-CRYPT-LUKS2-37d3be0352e442eca3b448d6e6f02ec4-0tCJ9m-aSE6-s91f-kR95-dcCT-UWFq-k9Kxe4'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-24 05:57:46.407343 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:46.407357 | orchestrator | 2026-03-24 05:57:46.407369 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-24 05:57:46.407382 | orchestrator | Tuesday 24 March 2026 05:57:34 +0000 (0:00:01.427) 1:08:15.266 ********* 2026-03-24 05:57:46.407392 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:46.407404 | orchestrator | 2026-03-24 05:57:46.407415 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-24 05:57:46.407426 | orchestrator | Tuesday 24 March 2026 05:57:35 +0000 (0:00:01.578) 1:08:16.845 ********* 2026-03-24 05:57:46.407437 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:46.407448 | orchestrator | 2026-03-24 05:57:46.407459 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:57:46.407469 | orchestrator | Tuesday 24 March 2026 05:57:37 +0000 (0:00:01.110) 1:08:17.956 ********* 2026-03-24 05:57:46.407480 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:57:46.407491 | orchestrator | 2026-03-24 05:57:46.407502 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:57:46.407512 | orchestrator | Tuesday 24 March 2026 05:57:38 +0000 (0:00:01.482) 1:08:19.438 ********* 2026-03-24 05:57:46.407523 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:46.407536 | orchestrator | 2026-03-24 05:57:46.407549 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-24 05:57:46.407562 | orchestrator | Tuesday 24 March 2026 05:57:39 +0000 (0:00:01.124) 1:08:20.563 ********* 2026-03-24 05:57:46.407574 | orchestrator | skipping: [testbed-node-5] 2026-03-24 
05:57:46.407587 | orchestrator | 2026-03-24 05:57:46.407599 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-24 05:57:46.407611 | orchestrator | Tuesday 24 March 2026 05:57:40 +0000 (0:00:01.222) 1:08:21.786 ********* 2026-03-24 05:57:46.407624 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:46.407636 | orchestrator | 2026-03-24 05:57:46.407656 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-24 05:57:46.407669 | orchestrator | Tuesday 24 March 2026 05:57:42 +0000 (0:00:01.150) 1:08:22.936 ********* 2026-03-24 05:57:46.407682 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-24 05:57:46.407695 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-24 05:57:46.407707 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-24 05:57:46.407747 | orchestrator | 2026-03-24 05:57:46.407768 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-24 05:57:46.407785 | orchestrator | Tuesday 24 March 2026 05:57:43 +0000 (0:00:01.949) 1:08:24.886 ********* 2026-03-24 05:57:46.407804 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-24 05:57:46.407824 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-24 05:57:46.407844 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-24 05:57:46.407863 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:57:46.407880 | orchestrator | 2026-03-24 05:57:46.407893 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-24 05:57:46.407904 | orchestrator | Tuesday 24 March 2026 05:57:45 +0000 (0:00:01.165) 1:08:26.052 ********* 2026-03-24 05:57:46.407915 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-24 05:57:46.407927 | 
orchestrator | 2026-03-24 05:57:46.407947 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:58:27.702358 | orchestrator | Tuesday 24 March 2026 05:57:46 +0000 (0:00:01.243) 1:08:27.295 ********* 2026-03-24 05:58:27.702511 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.702530 | orchestrator | 2026-03-24 05:58:27.702544 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:58:27.702556 | orchestrator | Tuesday 24 March 2026 05:57:47 +0000 (0:00:01.121) 1:08:28.416 ********* 2026-03-24 05:58:27.702567 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.702579 | orchestrator | 2026-03-24 05:58:27.702590 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 05:58:27.702602 | orchestrator | Tuesday 24 March 2026 05:57:48 +0000 (0:00:01.131) 1:08:29.548 ********* 2026-03-24 05:58:27.702613 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.702624 | orchestrator | 2026-03-24 05:58:27.702635 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 05:58:27.702646 | orchestrator | Tuesday 24 March 2026 05:57:49 +0000 (0:00:01.116) 1:08:30.665 ********* 2026-03-24 05:58:27.702658 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.702670 | orchestrator | 2026-03-24 05:58:27.702681 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 05:58:27.702692 | orchestrator | Tuesday 24 March 2026 05:57:50 +0000 (0:00:01.191) 1:08:31.856 ********* 2026-03-24 05:58:27.702703 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 05:58:27.702715 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 05:58:27.702726 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-24 05:58:27.702773 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.702787 | orchestrator | 2026-03-24 05:58:27.702798 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 05:58:27.702809 | orchestrator | Tuesday 24 March 2026 05:57:52 +0000 (0:00:01.383) 1:08:33.240 ********* 2026-03-24 05:58:27.702842 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 05:58:27.702853 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 05:58:27.702873 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-24 05:58:27.702886 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.702898 | orchestrator | 2026-03-24 05:58:27.702911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 05:58:27.702954 | orchestrator | Tuesday 24 March 2026 05:57:53 +0000 (0:00:01.342) 1:08:34.583 ********* 2026-03-24 05:58:27.702967 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 05:58:27.702979 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 05:58:27.702993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-24 05:58:27.703005 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.703016 | orchestrator | 2026-03-24 05:58:27.703027 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 05:58:27.703037 | orchestrator | Tuesday 24 March 2026 05:57:55 +0000 (0:00:01.433) 1:08:36.016 ********* 2026-03-24 05:58:27.703048 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.703059 | orchestrator | 2026-03-24 05:58:27.703070 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 05:58:27.703080 | orchestrator | Tuesday 24 March 2026 05:57:56 +0000 
(0:00:01.127) 1:08:37.143 ********* 2026-03-24 05:58:27.703091 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-24 05:58:27.703102 | orchestrator | 2026-03-24 05:58:27.703113 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-24 05:58:27.703123 | orchestrator | Tuesday 24 March 2026 05:57:57 +0000 (0:00:01.253) 1:08:38.397 ********* 2026-03-24 05:58:27.703134 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:58:27.703146 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:58:27.703157 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:58:27.703167 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-24 05:58:27.703178 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:58:27.703189 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-24 05:58:27.703199 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:58:27.703210 | orchestrator | 2026-03-24 05:58:27.703221 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-24 05:58:27.703232 | orchestrator | Tuesday 24 March 2026 05:57:59 +0000 (0:00:01.926) 1:08:40.323 ********* 2026-03-24 05:58:27.703242 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-24 05:58:27.703253 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-24 05:58:27.703263 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-24 05:58:27.703275 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-24 05:58:27.703285 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-24 05:58:27.703296 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-24 05:58:27.703307 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-24 05:58:27.703317 | orchestrator | 2026-03-24 05:58:27.703328 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-24 05:58:27.703339 | orchestrator | Tuesday 24 March 2026 05:58:01 +0000 (0:00:02.007) 1:08:42.331 ********* 2026-03-24 05:58:27.703350 | orchestrator | changed: [testbed-node-5] 2026-03-24 05:58:27.703360 | orchestrator | 2026-03-24 05:58:27.703390 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-24 05:58:27.703402 | orchestrator | Tuesday 24 March 2026 05:58:03 +0000 (0:00:01.982) 1:08:44.314 ********* 2026-03-24 05:58:27.703414 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 05:58:27.703426 | orchestrator | 2026-03-24 05:58:27.703437 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-24 05:58:27.703457 | orchestrator | Tuesday 24 March 2026 05:58:06 +0000 (0:00:02.607) 1:08:46.921 ********* 2026-03-24 05:58:27.703469 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 05:58:27.703479 | orchestrator | 2026-03-24 05:58:27.703490 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-24 05:58:27.703501 | orchestrator | Tuesday 24 March 2026 05:58:08 +0000 (0:00:02.017) 1:08:48.939 ********* 2026-03-24 05:58:27.703512 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-03-24 05:58:27.703523 | orchestrator | 2026-03-24 05:58:27.703534 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-24 05:58:27.703544 | orchestrator | Tuesday 24 March 2026 05:58:09 +0000 (0:00:01.113) 1:08:50.052 ********* 2026-03-24 05:58:27.703555 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-03-24 05:58:27.703566 | orchestrator | 2026-03-24 05:58:27.703577 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-24 05:58:27.703588 | orchestrator | Tuesday 24 March 2026 05:58:10 +0000 (0:00:01.146) 1:08:51.199 ********* 2026-03-24 05:58:27.703599 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.703610 | orchestrator | 2026-03-24 05:58:27.703627 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-24 05:58:27.703638 | orchestrator | Tuesday 24 March 2026 05:58:11 +0000 (0:00:01.152) 1:08:52.352 ********* 2026-03-24 05:58:27.703655 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.703673 | orchestrator | 2026-03-24 05:58:27.703692 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-24 05:58:27.703709 | orchestrator | Tuesday 24 March 2026 05:58:12 +0000 (0:00:01.507) 1:08:53.860 ********* 2026-03-24 05:58:27.703726 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.703766 | orchestrator | 2026-03-24 05:58:27.703786 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-24 05:58:27.703805 | orchestrator | Tuesday 24 March 2026 05:58:14 +0000 (0:00:01.508) 1:08:55.368 ********* 2026-03-24 05:58:27.703825 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.703844 | orchestrator | 2026-03-24 05:58:27.703862 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-24 05:58:27.703880 | orchestrator | Tuesday 24 March 2026 05:58:16 +0000 (0:00:01.576) 1:08:56.945 ********* 2026-03-24 05:58:27.703897 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.703914 | orchestrator | 2026-03-24 05:58:27.703929 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-24 05:58:27.703945 | orchestrator | Tuesday 24 March 2026 05:58:17 +0000 (0:00:01.106) 1:08:58.052 ********* 2026-03-24 05:58:27.703961 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.703977 | orchestrator | 2026-03-24 05:58:27.703993 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-24 05:58:27.704010 | orchestrator | Tuesday 24 March 2026 05:58:18 +0000 (0:00:01.110) 1:08:59.162 ********* 2026-03-24 05:58:27.704027 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.704045 | orchestrator | 2026-03-24 05:58:27.704062 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-24 05:58:27.704080 | orchestrator | Tuesday 24 March 2026 05:58:19 +0000 (0:00:01.134) 1:09:00.297 ********* 2026-03-24 05:58:27.704098 | 
orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.704117 | orchestrator | 2026-03-24 05:58:27.704134 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-24 05:58:27.704152 | orchestrator | Tuesday 24 March 2026 05:58:21 +0000 (0:00:02.016) 1:09:02.313 ********* 2026-03-24 05:58:27.704170 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.704190 | orchestrator | 2026-03-24 05:58:27.704207 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-24 05:58:27.704227 | orchestrator | Tuesday 24 March 2026 05:58:22 +0000 (0:00:01.519) 1:09:03.833 ********* 2026-03-24 05:58:27.704250 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.704262 | orchestrator | 2026-03-24 05:58:27.704272 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-24 05:58:27.704283 | orchestrator | Tuesday 24 March 2026 05:58:23 +0000 (0:00:00.773) 1:09:04.606 ********* 2026-03-24 05:58:27.704294 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.704305 | orchestrator | 2026-03-24 05:58:27.704315 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-24 05:58:27.704326 | orchestrator | Tuesday 24 March 2026 05:58:24 +0000 (0:00:00.784) 1:09:05.390 ********* 2026-03-24 05:58:27.704343 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.704362 | orchestrator | 2026-03-24 05:58:27.704380 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-24 05:58:27.704398 | orchestrator | Tuesday 24 March 2026 05:58:25 +0000 (0:00:00.805) 1:09:06.196 ********* 2026-03-24 05:58:27.704415 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.704433 | orchestrator | 2026-03-24 05:58:27.704451 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-24 05:58:27.704469 
| orchestrator | Tuesday 24 March 2026 05:58:26 +0000 (0:00:00.795) 1:09:06.992 ********* 2026-03-24 05:58:27.704487 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:58:27.704505 | orchestrator | 2026-03-24 05:58:27.704524 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-24 05:58:27.704543 | orchestrator | Tuesday 24 March 2026 05:58:26 +0000 (0:00:00.828) 1:09:07.820 ********* 2026-03-24 05:58:27.704561 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:58:27.704572 | orchestrator | 2026-03-24 05:58:27.704597 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-24 05:59:07.704678 | orchestrator | Tuesday 24 March 2026 05:58:27 +0000 (0:00:00.766) 1:09:08.587 ********* 2026-03-24 05:59:07.704856 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.704875 | orchestrator | 2026-03-24 05:59:07.704889 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-24 05:59:07.704901 | orchestrator | Tuesday 24 March 2026 05:58:28 +0000 (0:00:00.787) 1:09:09.375 ********* 2026-03-24 05:59:07.704912 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.704923 | orchestrator | 2026-03-24 05:59:07.704934 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-24 05:59:07.704945 | orchestrator | Tuesday 24 March 2026 05:58:29 +0000 (0:00:00.777) 1:09:10.152 ********* 2026-03-24 05:59:07.704956 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:59:07.704968 | orchestrator | 2026-03-24 05:59:07.704979 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-24 05:59:07.704990 | orchestrator | Tuesday 24 March 2026 05:58:30 +0000 (0:00:00.823) 1:09:10.975 ********* 2026-03-24 05:59:07.705001 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:59:07.705012 | orchestrator | 2026-03-24 05:59:07.705023 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-24 05:59:07.705034 | orchestrator | Tuesday 24 March 2026 05:58:30 +0000 (0:00:00.796) 1:09:11.772 ********* 2026-03-24 05:59:07.705045 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705055 | orchestrator | 2026-03-24 05:59:07.705066 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-24 05:59:07.705077 | orchestrator | Tuesday 24 March 2026 05:58:31 +0000 (0:00:00.816) 1:09:12.588 ********* 2026-03-24 05:59:07.705088 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705098 | orchestrator | 2026-03-24 05:59:07.705109 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-24 05:59:07.705120 | orchestrator | Tuesday 24 March 2026 05:58:32 +0000 (0:00:00.761) 1:09:13.350 ********* 2026-03-24 05:59:07.705131 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705142 | orchestrator | 2026-03-24 05:59:07.705169 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-24 05:59:07.705180 | orchestrator | Tuesday 24 March 2026 05:58:33 +0000 (0:00:00.750) 1:09:14.100 ********* 2026-03-24 05:59:07.705214 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705227 | orchestrator | 2026-03-24 05:59:07.705240 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-24 05:59:07.705253 | orchestrator | Tuesday 24 March 2026 05:58:33 +0000 (0:00:00.746) 1:09:14.847 ********* 2026-03-24 05:59:07.705266 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705278 | orchestrator | 2026-03-24 05:59:07.705291 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-24 05:59:07.705304 | orchestrator | Tuesday 24 March 2026 05:58:34 +0000 (0:00:00.745) 1:09:15.592 ********* 
2026-03-24 05:59:07.705316 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705329 | orchestrator | 2026-03-24 05:59:07.705341 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-24 05:59:07.705353 | orchestrator | Tuesday 24 March 2026 05:58:35 +0000 (0:00:00.810) 1:09:16.403 ********* 2026-03-24 05:59:07.705366 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705378 | orchestrator | 2026-03-24 05:59:07.705391 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-24 05:59:07.705403 | orchestrator | Tuesday 24 March 2026 05:58:36 +0000 (0:00:00.752) 1:09:17.156 ********* 2026-03-24 05:59:07.705416 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705428 | orchestrator | 2026-03-24 05:59:07.705447 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-24 05:59:07.705464 | orchestrator | Tuesday 24 March 2026 05:58:37 +0000 (0:00:00.766) 1:09:17.923 ********* 2026-03-24 05:59:07.705477 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705490 | orchestrator | 2026-03-24 05:59:07.705503 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-24 05:59:07.705514 | orchestrator | Tuesday 24 March 2026 05:58:37 +0000 (0:00:00.764) 1:09:18.687 ********* 2026-03-24 05:59:07.705525 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705536 | orchestrator | 2026-03-24 05:59:07.705547 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-24 05:59:07.705557 | orchestrator | Tuesday 24 March 2026 05:58:38 +0000 (0:00:00.755) 1:09:19.442 ********* 2026-03-24 05:59:07.705568 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705579 | orchestrator | 2026-03-24 05:59:07.705590 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-24 05:59:07.705600 | orchestrator | Tuesday 24 March 2026 05:58:39 +0000 (0:00:00.779) 1:09:20.222 ********* 2026-03-24 05:59:07.705611 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705621 | orchestrator | 2026-03-24 05:59:07.705632 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-24 05:59:07.705642 | orchestrator | Tuesday 24 March 2026 05:58:40 +0000 (0:00:00.774) 1:09:20.996 ********* 2026-03-24 05:59:07.705653 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:59:07.705664 | orchestrator | 2026-03-24 05:59:07.705675 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-24 05:59:07.705685 | orchestrator | Tuesday 24 March 2026 05:58:41 +0000 (0:00:01.606) 1:09:22.603 ********* 2026-03-24 05:59:07.705696 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:59:07.705707 | orchestrator | 2026-03-24 05:59:07.705717 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-24 05:59:07.705728 | orchestrator | Tuesday 24 March 2026 05:58:43 +0000 (0:00:01.863) 1:09:24.466 ********* 2026-03-24 05:59:07.705739 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-03-24 05:59:07.705751 | orchestrator | 2026-03-24 05:59:07.705792 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-24 05:59:07.705804 | orchestrator | Tuesday 24 March 2026 05:58:44 +0000 (0:00:01.103) 1:09:25.570 ********* 2026-03-24 05:59:07.705815 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705826 | orchestrator | 2026-03-24 05:59:07.705837 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-24 05:59:07.705866 | orchestrator | Tuesday 24 March 2026 05:58:45 +0000 (0:00:01.174) 1:09:26.745 ********* 
2026-03-24 05:59:07.705886 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.705898 | orchestrator | 2026-03-24 05:59:07.705909 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-24 05:59:07.705920 | orchestrator | Tuesday 24 March 2026 05:58:46 +0000 (0:00:01.102) 1:09:27.847 ********* 2026-03-24 05:59:07.705931 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-24 05:59:07.705942 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-24 05:59:07.705953 | orchestrator | 2026-03-24 05:59:07.705964 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-24 05:59:07.705975 | orchestrator | Tuesday 24 March 2026 05:58:48 +0000 (0:00:01.798) 1:09:29.646 ********* 2026-03-24 05:59:07.705986 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:59:07.705996 | orchestrator | 2026-03-24 05:59:07.706007 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-24 05:59:07.706080 | orchestrator | Tuesday 24 March 2026 05:58:50 +0000 (0:00:01.456) 1:09:31.103 ********* 2026-03-24 05:59:07.706093 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.706104 | orchestrator | 2026-03-24 05:59:07.706115 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-24 05:59:07.706126 | orchestrator | Tuesday 24 March 2026 05:58:51 +0000 (0:00:01.156) 1:09:32.260 ********* 2026-03-24 05:59:07.706136 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.706147 | orchestrator | 2026-03-24 05:59:07.706158 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-24 05:59:07.706169 | orchestrator | Tuesday 24 March 2026 05:58:52 +0000 (0:00:00.804) 1:09:33.065 ********* 2026-03-24 05:59:07.706179 | orchestrator | 
skipping: [testbed-node-5] 2026-03-24 05:59:07.706190 | orchestrator | 2026-03-24 05:59:07.706201 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-24 05:59:07.706218 | orchestrator | Tuesday 24 March 2026 05:58:52 +0000 (0:00:00.755) 1:09:33.820 ********* 2026-03-24 05:59:07.706229 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-03-24 05:59:07.706240 | orchestrator | 2026-03-24 05:59:07.706251 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-24 05:59:07.706262 | orchestrator | Tuesday 24 March 2026 05:58:54 +0000 (0:00:01.121) 1:09:34.942 ********* 2026-03-24 05:59:07.706273 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:59:07.706284 | orchestrator | 2026-03-24 05:59:07.706294 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-24 05:59:07.706305 | orchestrator | Tuesday 24 March 2026 05:58:55 +0000 (0:00:01.800) 1:09:36.743 ********* 2026-03-24 05:59:07.706316 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-24 05:59:07.706327 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-24 05:59:07.706338 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-24 05:59:07.706348 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.706359 | orchestrator | 2026-03-24 05:59:07.706370 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-24 05:59:07.706381 | orchestrator | Tuesday 24 March 2026 05:58:56 +0000 (0:00:01.150) 1:09:37.893 ********* 2026-03-24 05:59:07.706391 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.706402 | orchestrator | 2026-03-24 05:59:07.706413 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-24 05:59:07.706424 | orchestrator | Tuesday 24 March 2026 05:58:58 +0000 (0:00:01.183) 1:09:39.076 ********* 2026-03-24 05:59:07.706434 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.706445 | orchestrator | 2026-03-24 05:59:07.706456 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-24 05:59:07.706467 | orchestrator | Tuesday 24 March 2026 05:58:59 +0000 (0:00:01.163) 1:09:40.240 ********* 2026-03-24 05:59:07.706485 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.706495 | orchestrator | 2026-03-24 05:59:07.706506 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-24 05:59:07.706517 | orchestrator | Tuesday 24 March 2026 05:59:00 +0000 (0:00:01.257) 1:09:41.498 ********* 2026-03-24 05:59:07.706528 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.706539 | orchestrator | 2026-03-24 05:59:07.706549 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-24 05:59:07.706560 | orchestrator | Tuesday 24 March 2026 05:59:01 +0000 (0:00:01.169) 1:09:42.667 ********* 2026-03-24 05:59:07.706571 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.706582 | orchestrator | 2026-03-24 05:59:07.706592 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-24 05:59:07.706603 | orchestrator | Tuesday 24 March 2026 05:59:02 +0000 (0:00:00.786) 1:09:43.454 ********* 2026-03-24 05:59:07.706614 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:59:07.706625 | orchestrator | 2026-03-24 05:59:07.706635 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-24 05:59:07.706646 | orchestrator | Tuesday 24 March 2026 05:59:04 +0000 (0:00:02.092) 1:09:45.547 ********* 2026-03-24 05:59:07.706657 | orchestrator | ok: 
[testbed-node-5] 2026-03-24 05:59:07.706668 | orchestrator | 2026-03-24 05:59:07.706679 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-24 05:59:07.706689 | orchestrator | Tuesday 24 March 2026 05:59:05 +0000 (0:00:00.831) 1:09:46.378 ********* 2026-03-24 05:59:07.706700 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-03-24 05:59:07.706711 | orchestrator | 2026-03-24 05:59:07.706722 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-24 05:59:07.706732 | orchestrator | Tuesday 24 March 2026 05:59:06 +0000 (0:00:01.091) 1:09:47.470 ********* 2026-03-24 05:59:07.706743 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:07.706776 | orchestrator | 2026-03-24 05:59:07.706789 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-24 05:59:07.706808 | orchestrator | Tuesday 24 March 2026 05:59:07 +0000 (0:00:01.120) 1:09:48.590 ********* 2026-03-24 05:59:49.028225 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.028334 | orchestrator | 2026-03-24 05:59:49.028349 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-24 05:59:49.028359 | orchestrator | Tuesday 24 March 2026 05:59:08 +0000 (0:00:01.147) 1:09:49.738 ********* 2026-03-24 05:59:49.028368 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.028376 | orchestrator | 2026-03-24 05:59:49.028384 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-24 05:59:49.028393 | orchestrator | Tuesday 24 March 2026 05:59:09 +0000 (0:00:01.143) 1:09:50.881 ********* 2026-03-24 05:59:49.028401 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.028409 | orchestrator | 2026-03-24 05:59:49.028418 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-24 05:59:49.028426 | orchestrator | Tuesday 24 March 2026 05:59:11 +0000 (0:00:01.177) 1:09:52.058 ********* 2026-03-24 05:59:49.028434 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.028442 | orchestrator | 2026-03-24 05:59:49.028450 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-24 05:59:49.028458 | orchestrator | Tuesday 24 March 2026 05:59:12 +0000 (0:00:01.107) 1:09:53.166 ********* 2026-03-24 05:59:49.028465 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.028473 | orchestrator | 2026-03-24 05:59:49.028481 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-24 05:59:49.028489 | orchestrator | Tuesday 24 March 2026 05:59:13 +0000 (0:00:01.212) 1:09:54.378 ********* 2026-03-24 05:59:49.028497 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.028505 | orchestrator | 2026-03-24 05:59:49.028513 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-24 05:59:49.028547 | orchestrator | Tuesday 24 March 2026 05:59:14 +0000 (0:00:01.155) 1:09:55.534 ********* 2026-03-24 05:59:49.028561 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.028581 | orchestrator | 2026-03-24 05:59:49.028612 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-24 05:59:49.028626 | orchestrator | Tuesday 24 March 2026 05:59:15 +0000 (0:00:01.165) 1:09:56.699 ********* 2026-03-24 05:59:49.028640 | orchestrator | ok: [testbed-node-5] 2026-03-24 05:59:49.028653 | orchestrator | 2026-03-24 05:59:49.028667 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-24 05:59:49.028680 | orchestrator | Tuesday 24 March 2026 05:59:16 +0000 (0:00:00.772) 1:09:57.472 ********* 2026-03-24 05:59:49.028694 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-03-24 05:59:49.028709 | orchestrator | 2026-03-24 05:59:49.028720 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-24 05:59:49.028733 | orchestrator | Tuesday 24 March 2026 05:59:17 +0000 (0:00:01.135) 1:09:58.607 ********* 2026-03-24 05:59:49.028747 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-03-24 05:59:49.028762 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-24 05:59:49.028803 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-24 05:59:49.028817 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-24 05:59:49.028832 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-24 05:59:49.028846 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-24 05:59:49.028860 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-24 05:59:49.028875 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-24 05:59:49.028889 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-24 05:59:49.028904 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-24 05:59:49.028920 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-24 05:59:49.028934 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-24 05:59:49.028949 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-24 05:59:49.028964 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-24 05:59:49.028979 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-24 05:59:49.028993 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-24 05:59:49.029007 | orchestrator | 2026-03-24 05:59:49.029021 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-24 05:59:49.029037 | orchestrator | Tuesday 24 March 2026 05:59:24 +0000 (0:00:06.452) 1:10:05.060 ********* 2026-03-24 05:59:49.029052 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-24 05:59:49.029067 | orchestrator | 2026-03-24 05:59:49.029081 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-24 05:59:49.029095 | orchestrator | Tuesday 24 March 2026 05:59:25 +0000 (0:00:01.141) 1:10:06.202 ********* 2026-03-24 05:59:49.029109 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 05:59:49.029125 | orchestrator | 2026-03-24 05:59:49.029139 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-24 05:59:49.029153 | orchestrator | Tuesday 24 March 2026 05:59:26 +0000 (0:00:01.472) 1:10:07.674 ********* 2026-03-24 05:59:49.029168 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 05:59:49.029181 | orchestrator | 2026-03-24 05:59:49.029196 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-24 05:59:49.029210 | orchestrator | Tuesday 24 March 2026 05:59:28 +0000 (0:00:01.632) 1:10:09.307 ********* 2026-03-24 05:59:49.029224 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029250 | orchestrator | 2026-03-24 05:59:49.029264 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-24 05:59:49.029298 | orchestrator | Tuesday 24 March 2026 05:59:29 +0000 (0:00:00.773) 1:10:10.081 ********* 2026-03-24 05:59:49.029313 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029327 | 
orchestrator | 2026-03-24 05:59:49.029342 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-24 05:59:49.029356 | orchestrator | Tuesday 24 March 2026 05:59:29 +0000 (0:00:00.768) 1:10:10.850 ********* 2026-03-24 05:59:49.029370 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029384 | orchestrator | 2026-03-24 05:59:49.029397 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-24 05:59:49.029411 | orchestrator | Tuesday 24 March 2026 05:59:30 +0000 (0:00:00.821) 1:10:11.671 ********* 2026-03-24 05:59:49.029425 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029439 | orchestrator | 2026-03-24 05:59:49.029453 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-24 05:59:49.029467 | orchestrator | Tuesday 24 March 2026 05:59:31 +0000 (0:00:00.803) 1:10:12.475 ********* 2026-03-24 05:59:49.029480 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029494 | orchestrator | 2026-03-24 05:59:49.029507 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-24 05:59:49.029521 | orchestrator | Tuesday 24 March 2026 05:59:32 +0000 (0:00:00.771) 1:10:13.246 ********* 2026-03-24 05:59:49.029535 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029550 | orchestrator | 2026-03-24 05:59:49.029563 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-24 05:59:49.029577 | orchestrator | Tuesday 24 March 2026 05:59:33 +0000 (0:00:00.813) 1:10:14.060 ********* 2026-03-24 05:59:49.029590 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029604 | orchestrator | 2026-03-24 05:59:49.029618 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-03-24 05:59:49.029640 | orchestrator | Tuesday 24 March 2026 05:59:33 +0000 (0:00:00.754) 1:10:14.814 ********* 2026-03-24 05:59:49.029654 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029668 | orchestrator | 2026-03-24 05:59:49.029682 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-24 05:59:49.029696 | orchestrator | Tuesday 24 March 2026 05:59:34 +0000 (0:00:00.770) 1:10:15.584 ********* 2026-03-24 05:59:49.029710 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029724 | orchestrator | 2026-03-24 05:59:49.029739 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-24 05:59:49.029751 | orchestrator | Tuesday 24 March 2026 05:59:35 +0000 (0:00:00.803) 1:10:16.388 ********* 2026-03-24 05:59:49.029764 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029797 | orchestrator | 2026-03-24 05:59:49.029812 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-24 05:59:49.029827 | orchestrator | Tuesday 24 March 2026 05:59:36 +0000 (0:00:00.759) 1:10:17.148 ********* 2026-03-24 05:59:49.029839 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.029851 | orchestrator | 2026-03-24 05:59:49.029864 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-24 05:59:49.029878 | orchestrator | Tuesday 24 March 2026 05:59:37 +0000 (0:00:00.771) 1:10:17.920 ********* 2026-03-24 05:59:49.029890 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-24 05:59:49.029903 | orchestrator | 2026-03-24 05:59:49.029917 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-24 05:59:49.029930 | orchestrator | Tuesday 24 March 2026 05:59:41 +0000 (0:00:04.023) 1:10:21.943 ********* 2026-03-24 05:59:49.029944 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 05:59:49.029959 | orchestrator | 2026-03-24 05:59:49.029984 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-24 05:59:49.029998 | orchestrator | Tuesday 24 March 2026 05:59:42 +0000 (0:00:00.981) 1:10:22.925 ********* 2026-03-24 05:59:49.030078 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-24 05:59:49.030102 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-24 05:59:49.030117 | orchestrator | 2026-03-24 05:59:49.030166 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-24 05:59:49.030181 | orchestrator | Tuesday 24 March 2026 05:59:46 +0000 (0:00:04.645) 1:10:27.570 ********* 2026-03-24 05:59:49.030194 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.030207 | orchestrator | 2026-03-24 05:59:49.030221 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-24 05:59:49.030235 | orchestrator | Tuesday 24 March 2026 05:59:47 +0000 (0:00:00.774) 1:10:28.344 ********* 2026-03-24 05:59:49.030247 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.030260 | orchestrator | 2026-03-24 05:59:49.030274 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-24 05:59:49.030288 | orchestrator | Tuesday 24 March 2026 05:59:48 +0000 (0:00:00.774) 1:10:29.119 ********* 2026-03-24 05:59:49.030301 | orchestrator | skipping: [testbed-node-5] 2026-03-24 05:59:49.030314 | orchestrator | 2026-03-24 05:59:49.030328 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-24 05:59:49.030354 | orchestrator | Tuesday 24 March 2026 05:59:49 +0000 (0:00:00.795) 1:10:29.915 ********* 2026-03-24 06:00:55.864882 | orchestrator | skipping: [testbed-node-5] 2026-03-24 06:00:55.864988 | orchestrator | 2026-03-24 06:00:55.865001 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-24 06:00:55.865010 | orchestrator | Tuesday 24 March 2026 05:59:49 +0000 (0:00:00.819) 1:10:30.735 ********* 2026-03-24 06:00:55.865018 | orchestrator | skipping: [testbed-node-5] 2026-03-24 06:00:55.865025 | orchestrator | 2026-03-24 06:00:55.865033 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-24 06:00:55.865040 | orchestrator | Tuesday 24 March 2026 05:59:50 +0000 (0:00:00.801) 1:10:31.536 ********* 2026-03-24 06:00:55.865048 | orchestrator | ok: [testbed-node-5] 2026-03-24 06:00:55.865056 | orchestrator | 2026-03-24 06:00:55.865064 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-24 06:00:55.865072 | orchestrator | Tuesday 24 March 2026 05:59:51 +0000 (0:00:00.881) 1:10:32.418 ********* 2026-03-24 06:00:55.865079 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 06:00:55.865087 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 06:00:55.865094 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-24 06:00:55.865101 | orchestrator | skipping: 
[testbed-node-5] 2026-03-24 06:00:55.865109 | orchestrator | 2026-03-24 06:00:55.865116 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-24 06:00:55.865123 | orchestrator | Tuesday 24 March 2026 05:59:52 +0000 (0:00:01.049) 1:10:33.468 ********* 2026-03-24 06:00:55.865131 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 06:00:55.865138 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 06:00:55.865145 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-24 06:00:55.865165 | orchestrator | skipping: [testbed-node-5] 2026-03-24 06:00:55.865193 | orchestrator | 2026-03-24 06:00:55.865201 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-24 06:00:55.865209 | orchestrator | Tuesday 24 March 2026 05:59:53 +0000 (0:00:01.050) 1:10:34.519 ********* 2026-03-24 06:00:55.865216 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-24 06:00:55.865223 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-24 06:00:55.865230 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-24 06:00:55.865238 | orchestrator | skipping: [testbed-node-5] 2026-03-24 06:00:55.865245 | orchestrator | 2026-03-24 06:00:55.865252 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-24 06:00:55.865259 | orchestrator | Tuesday 24 March 2026 05:59:54 +0000 (0:00:01.085) 1:10:35.604 ********* 2026-03-24 06:00:55.865266 | orchestrator | ok: [testbed-node-5] 2026-03-24 06:00:55.865273 | orchestrator | 2026-03-24 06:00:55.865281 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-24 06:00:55.865288 | orchestrator | Tuesday 24 March 2026 05:59:55 +0000 (0:00:00.835) 1:10:36.439 ********* 2026-03-24 06:00:55.865295 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-03-24 06:00:55.865302 | orchestrator | 2026-03-24 06:00:55.865309 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-24 06:00:55.865317 | orchestrator | Tuesday 24 March 2026 05:59:57 +0000 (0:00:01.650) 1:10:38.090 ********* 2026-03-24 06:00:55.865324 | orchestrator | ok: [testbed-node-5] 2026-03-24 06:00:55.865332 | orchestrator | 2026-03-24 06:00:55.865339 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-24 06:00:55.865346 | orchestrator | Tuesday 24 March 2026 05:59:58 +0000 (0:00:01.387) 1:10:39.477 ********* 2026-03-24 06:00:55.865355 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-03-24 06:00:55.865363 | orchestrator | 2026-03-24 06:00:55.865372 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-24 06:00:55.865380 | orchestrator | Tuesday 24 March 2026 05:59:59 +0000 (0:00:01.086) 1:10:40.564 ********* 2026-03-24 06:00:55.865389 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 06:00:55.865397 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-24 06:00:55.865405 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 06:00:55.865413 | orchestrator | 2026-03-24 06:00:55.865421 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-24 06:00:55.865429 | orchestrator | Tuesday 24 March 2026 06:00:02 +0000 (0:00:03.302) 1:10:43.867 ********* 2026-03-24 06:00:55.865438 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-24 06:00:55.865446 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-24 06:00:55.865454 | orchestrator | ok: [testbed-node-5] 2026-03-24 06:00:55.865462 | orchestrator | 2026-03-24 06:00:55.865470 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-24 06:00:55.865479 | orchestrator | Tuesday 24 March 2026 06:00:05 +0000 (0:00:02.053) 1:10:45.921 ********* 2026-03-24 06:00:55.865487 | orchestrator | skipping: [testbed-node-5] 2026-03-24 06:00:55.865496 | orchestrator | 2026-03-24 06:00:55.865504 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-24 06:00:55.865512 | orchestrator | Tuesday 24 March 2026 06:00:05 +0000 (0:00:00.766) 1:10:46.687 ********* 2026-03-24 06:00:55.865520 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-03-24 06:00:55.865529 | orchestrator | 2026-03-24 06:00:55.865538 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-24 06:00:55.865546 | orchestrator | Tuesday 24 March 2026 06:00:06 +0000 (0:00:01.130) 1:10:47.817 ********* 2026-03-24 06:00:55.865556 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 06:00:55.865566 | orchestrator | 2026-03-24 06:00:55.865579 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-24 06:00:55.865587 | orchestrator | Tuesday 24 March 2026 06:00:08 +0000 (0:00:01.575) 1:10:49.393 ********* 2026-03-24 06:00:55.865609 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 06:00:55.865617 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-24 06:00:55.865625 | orchestrator | 2026-03-24 06:00:55.865632 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-24 06:00:55.865639 | orchestrator | Tuesday 24 March 2026 06:00:13 +0000 (0:00:05.389) 1:10:54.782 ********* 
2026-03-24 06:00:55.865646 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-24 06:00:55.865653 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-24 06:00:55.865661 | orchestrator | 2026-03-24 06:00:55.865668 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-24 06:00:55.865675 | orchestrator | Tuesday 24 March 2026 06:00:17 +0000 (0:00:03.154) 1:10:57.936 ********* 2026-03-24 06:00:55.865682 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-24 06:00:55.865690 | orchestrator | ok: [testbed-node-5] 2026-03-24 06:00:55.865697 | orchestrator | 2026-03-24 06:00:55.865704 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-24 06:00:55.865711 | orchestrator | Tuesday 24 March 2026 06:00:18 +0000 (0:00:01.697) 1:10:59.634 ********* 2026-03-24 06:00:55.865718 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-03-24 06:00:55.865725 | orchestrator | 2026-03-24 06:00:55.865733 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-24 06:00:55.865740 | orchestrator | Tuesday 24 March 2026 06:00:19 +0000 (0:00:01.133) 1:11:00.768 ********* 2026-03-24 06:00:55.865752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865821 | orchestrator | skipping: [testbed-node-5] 2026-03-24 06:00:55.865829 | orchestrator | 2026-03-24 06:00:55.865836 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-24 06:00:55.865843 | orchestrator | Tuesday 24 March 2026 06:00:21 +0000 (0:00:01.598) 1:11:02.366 ********* 2026-03-24 06:00:55.865850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-24 06:00:55.865886 | orchestrator | skipping: [testbed-node-5] 2026-03-24 06:00:55.865894 | orchestrator | 2026-03-24 06:00:55.865901 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-24 06:00:55.865915 | orchestrator | Tuesday 24 March 2026 06:00:23 +0000 (0:00:01.629) 1:11:03.996 ********* 2026-03-24 06:00:55.865922 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 06:00:55.865930 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 06:00:55.865937 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 06:00:55.865944 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 06:00:55.865953 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-24 06:00:55.865961 | orchestrator | 2026-03-24 06:00:55.865968 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-24 06:00:55.865975 | orchestrator | Tuesday 24 March 2026 06:00:55 +0000 (0:00:31.976) 1:11:35.973 ********* 2026-03-24 06:00:55.865982 | orchestrator | skipping: [testbed-node-5] 2026-03-24 06:00:55.865990 | orchestrator | 2026-03-24 06:00:55.865997 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-24 06:00:55.866008 | orchestrator | Tuesday 24 March 2026 06:00:55 +0000 (0:00:00.775) 1:11:36.748 ********* 2026-03-24 06:01:49.523567 | orchestrator | skipping: [testbed-node-5] 2026-03-24 06:01:49.523712 | orchestrator | 2026-03-24 06:01:49.523742 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-24 06:01:49.523763 | orchestrator | Tuesday 24 March 2026 06:00:56 +0000 (0:00:00.771) 1:11:37.519 ********* 2026-03-24 06:01:49.523780 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-03-24 06:01:49.523797 | orchestrator | 2026-03-24 06:01:49.523814 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-24 06:01:49.523860 | orchestrator | Tuesday 24 March 2026 06:00:57 +0000 (0:00:01.100) 1:11:38.619 ********* 2026-03-24 06:01:49.523878 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-03-24 06:01:49.523895 | orchestrator | 2026-03-24 06:01:49.523914 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-24 06:01:49.523926 | orchestrator | Tuesday 24 March 2026 06:00:58 +0000 (0:00:01.151) 1:11:39.770 ********* 2026-03-24 06:01:49.523936 | orchestrator | ok: [testbed-node-5] 2026-03-24 06:01:49.523947 | orchestrator | 2026-03-24 06:01:49.523957 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-24 06:01:49.523967 | orchestrator | Tuesday 24 March 2026 06:01:00 +0000 (0:00:02.075) 1:11:41.846 ********* 2026-03-24 06:01:49.523977 | orchestrator | ok: [testbed-node-5] 2026-03-24 06:01:49.523987 | orchestrator | 2026-03-24 06:01:49.523997 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-24 06:01:49.524007 | orchestrator | Tuesday 24 March 2026 06:01:02 +0000 (0:00:01.940) 1:11:43.787 ********* 2026-03-24 06:01:49.524017 | orchestrator | ok: [testbed-node-5] 2026-03-24 06:01:49.524027 | orchestrator | 2026-03-24 06:01:49.524053 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-24 06:01:49.524063 | orchestrator | Tuesday 24 March 2026 06:01:05 +0000 (0:00:02.956) 1:11:46.744 ********* 2026-03-24 06:01:49.524074 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-24 06:01:49.524085 | orchestrator | 2026-03-24 06:01:49.524095 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-03-24 06:01:49.524105 | 
orchestrator | skipping: no hosts matched 2026-03-24 06:01:49.524115 | orchestrator | 2026-03-24 06:01:49.524125 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-03-24 06:01:49.524163 | orchestrator | skipping: no hosts matched 2026-03-24 06:01:49.524173 | orchestrator | 2026-03-24 06:01:49.524183 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-03-24 06:01:49.524193 | orchestrator | skipping: no hosts matched 2026-03-24 06:01:49.524202 | orchestrator | 2026-03-24 06:01:49.524212 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-03-24 06:01:49.524222 | orchestrator | 2026-03-24 06:01:49.524231 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-03-24 06:01:49.524241 | orchestrator | Tuesday 24 March 2026 06:01:11 +0000 (0:00:05.278) 1:11:52.023 ********* 2026-03-24 06:01:49.524250 | orchestrator | changed: [testbed-node-0] 2026-03-24 06:01:49.524260 | orchestrator | changed: [testbed-node-1] 2026-03-24 06:01:49.524270 | orchestrator | changed: [testbed-node-2] 2026-03-24 06:01:49.524279 | orchestrator | changed: [testbed-node-3] 2026-03-24 06:01:49.524289 | orchestrator | changed: [testbed-node-4] 2026-03-24 06:01:49.524298 | orchestrator | changed: [testbed-node-5] 2026-03-24 06:01:49.524308 | orchestrator | 2026-03-24 06:01:49.524317 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-03-24 06:01:49.524327 | orchestrator | Tuesday 24 March 2026 06:01:13 +0000 (0:00:02.557) 1:11:54.580 ********* 2026-03-24 06:01:49.524337 | orchestrator | changed: [testbed-node-1] 2026-03-24 06:01:49.524346 | orchestrator | changed: [testbed-node-3] 2026-03-24 06:01:49.524356 | orchestrator | changed: [testbed-node-2] 2026-03-24 06:01:49.524365 | orchestrator | changed: [testbed-node-4] 2026-03-24 06:01:49.524374 | 
orchestrator | changed: [testbed-node-5]
2026-03-24 06:01:49.524384 | orchestrator | changed: [testbed-node-0]
2026-03-24 06:01:49.524393 | orchestrator |
2026-03-24 06:01:49.524403 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-24 06:01:49.524413 | orchestrator | Tuesday 24 March 2026 06:01:17 +0000 (0:00:03.713) 1:11:58.294 *********
2026-03-24 06:01:49.524423 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:01:49.524432 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:01:49.524442 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:01:49.524451 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:01:49.524461 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:01:49.524470 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:01:49.524480 | orchestrator |
2026-03-24 06:01:49.524489 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-24 06:01:49.524499 | orchestrator | Tuesday 24 March 2026 06:01:19 +0000 (0:00:02.328) 1:12:00.623 *********
2026-03-24 06:01:49.524508 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:01:49.524518 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:01:49.524527 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:01:49.524537 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:01:49.524546 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:01:49.524555 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:01:49.524565 | orchestrator |
2026-03-24 06:01:49.524574 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-24 06:01:49.524584 | orchestrator | Tuesday 24 March 2026 06:01:21 +0000 (0:00:01.952) 1:12:02.575 *********
2026-03-24 06:01:49.524595 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 06:01:49.524606 | orchestrator |
2026-03-24 06:01:49.524616 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-24 06:01:49.524625 | orchestrator | Tuesday 24 March 2026 06:01:23 +0000 (0:00:02.278) 1:12:04.854 *********
2026-03-24 06:01:49.524635 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 06:01:49.524645 | orchestrator |
2026-03-24 06:01:49.524674 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-24 06:01:49.524692 | orchestrator | Tuesday 24 March 2026 06:01:26 +0000 (0:00:02.326) 1:12:07.181 *********
2026-03-24 06:01:49.524702 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:01:49.524711 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:01:49.524721 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:01:49.524731 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:01:49.524740 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:01:49.524750 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:01:49.524760 | orchestrator |
2026-03-24 06:01:49.524769 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-24 06:01:49.524779 | orchestrator | Tuesday 24 March 2026 06:01:28 +0000 (0:00:02.401) 1:12:09.582 *********
2026-03-24 06:01:49.524788 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:01:49.524798 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:01:49.524807 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:01:49.524817 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:01:49.524845 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:01:49.524855 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:01:49.524864 | orchestrator |
2026-03-24 06:01:49.524874 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-24 06:01:49.524884 | orchestrator | Tuesday 24 March 2026 06:01:30 +0000 (0:00:02.116) 1:12:11.699 *********
2026-03-24 06:01:49.524893 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:01:49.524903 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:01:49.524913 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:01:49.524922 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:01:49.524932 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:01:49.524942 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:01:49.524951 | orchestrator |
2026-03-24 06:01:49.524961 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-24 06:01:49.524975 | orchestrator | Tuesday 24 March 2026 06:01:33 +0000 (0:00:02.552) 1:12:14.251 *********
2026-03-24 06:01:49.524985 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:01:49.524995 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:01:49.525004 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:01:49.525014 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:01:49.525024 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:01:49.525033 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:01:49.525042 | orchestrator |
2026-03-24 06:01:49.525052 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-24 06:01:49.525062 | orchestrator | Tuesday 24 March 2026 06:01:35 +0000 (0:00:02.104) 1:12:16.356 *********
2026-03-24 06:01:49.525072 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:01:49.525081 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:01:49.525091 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:01:49.525101 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:01:49.525110 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:01:49.525120 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:01:49.525129 | orchestrator |
2026-03-24 06:01:49.525139 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-24 06:01:49.525149 | orchestrator | Tuesday 24 March 2026 06:01:37 +0000 (0:00:02.108) 1:12:18.464 *********
2026-03-24 06:01:49.525158 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:01:49.525168 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:01:49.525177 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:01:49.525187 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:01:49.525196 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:01:49.525206 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:01:49.525216 | orchestrator |
2026-03-24 06:01:49.525225 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-24 06:01:49.525235 | orchestrator | Tuesday 24 March 2026 06:01:39 +0000 (0:00:01.761) 1:12:20.226 *********
2026-03-24 06:01:49.525245 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:01:49.525254 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:01:49.525264 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:01:49.525280 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:01:49.525290 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:01:49.525300 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:01:49.525309 | orchestrator |
2026-03-24 06:01:49.525319 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-24 06:01:49.525328 | orchestrator | Tuesday 24 March 2026 06:01:41 +0000 (0:00:01.801) 1:12:22.027 *********
2026-03-24 06:01:49.525338 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:01:49.525347 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:01:49.525357 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:01:49.525366 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:01:49.525376 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:01:49.525385 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:01:49.525395 | orchestrator |
2026-03-24 06:01:49.525404 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-24 06:01:49.525414 | orchestrator | Tuesday 24 March 2026 06:01:43 +0000 (0:00:02.507) 1:12:24.535 *********
2026-03-24 06:01:49.525424 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:01:49.525433 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:01:49.525443 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:01:49.525452 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:01:49.525461 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:01:49.525471 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:01:49.525480 | orchestrator |
2026-03-24 06:01:49.525490 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-24 06:01:49.525499 | orchestrator | Tuesday 24 March 2026 06:01:45 +0000 (0:00:02.168) 1:12:26.703 *********
2026-03-24 06:01:49.525509 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:01:49.525519 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:01:49.525528 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:01:49.525538 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:01:49.525547 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:01:49.525557 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:01:49.525566 | orchestrator |
2026-03-24 06:01:49.525576 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-24 06:01:49.525586 | orchestrator | Tuesday 24 March 2026 06:01:47 +0000 (0:00:01.978) 1:12:28.682 *********
2026-03-24 06:01:49.525595 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:01:49.525605 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:01:49.525614 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:01:49.525624 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:01:49.525634 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:01:49.525643 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:01:49.525653 | orchestrator |
2026-03-24 06:01:49.525669 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-24 06:02:45.187392 | orchestrator | Tuesday 24 March 2026 06:01:49 +0000 (0:00:01.727) 1:12:30.410 *********
2026-03-24 06:02:45.187473 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.187481 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.187485 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.187489 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:02:45.187494 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:02:45.187498 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:02:45.187502 | orchestrator |
2026-03-24 06:02:45.187507 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-24 06:02:45.187511 | orchestrator | Tuesday 24 March 2026 06:01:51 +0000 (0:00:01.992) 1:12:32.403 *********
2026-03-24 06:02:45.187515 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.187519 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.187523 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.187527 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:02:45.187531 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:02:45.187535 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:02:45.187538 | orchestrator |
2026-03-24 06:02:45.187542 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-24 06:02:45.187561 | orchestrator | Tuesday 24 March 2026 06:01:53 +0000 (0:00:01.742) 1:12:34.146 *********
2026-03-24 06:02:45.187565 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.187569 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.187573 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.187577 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:02:45.187580 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:02:45.187584 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:02:45.187588 | orchestrator |
2026-03-24 06:02:45.187602 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-24 06:02:45.187606 | orchestrator | Tuesday 24 March 2026 06:01:55 +0000 (0:00:01.979) 1:12:36.125 *********
2026-03-24 06:02:45.187610 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.187613 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.187617 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.187621 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:02:45.187625 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:02:45.187629 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:02:45.187632 | orchestrator |
2026-03-24 06:02:45.187636 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-24 06:02:45.187640 | orchestrator | Tuesday 24 March 2026 06:01:57 +0000 (0:00:02.059) 1:12:38.185 *********
2026-03-24 06:02:45.187644 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.187648 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.187652 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.187656 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:02:45.187660 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:02:45.187664 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:02:45.187668 | orchestrator |
2026-03-24 06:02:45.187672 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-24 06:02:45.187676 | orchestrator | Tuesday 24 March 2026 06:01:59 +0000 (0:00:02.093) 1:12:40.279 *********
2026-03-24 06:02:45.187680 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.187684 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:02:45.187688 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:02:45.187692 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:02:45.187696 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:02:45.187700 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:02:45.187703 | orchestrator |
2026-03-24 06:02:45.187707 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-24 06:02:45.187711 | orchestrator | Tuesday 24 March 2026 06:02:01 +0000 (0:00:02.122) 1:12:42.402 *********
2026-03-24 06:02:45.187715 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.187719 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:02:45.187723 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:02:45.187727 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:02:45.187731 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:02:45.187735 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:02:45.187739 | orchestrator |
2026-03-24 06:02:45.187742 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-24 06:02:45.187746 | orchestrator | Tuesday 24 March 2026 06:02:03 +0000 (0:00:01.921) 1:12:44.323 *********
2026-03-24 06:02:45.187750 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.187754 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:02:45.187758 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:02:45.187762 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:02:45.187766 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:02:45.187770 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:02:45.187773 | orchestrator |
2026-03-24 06:02:45.187777 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-24 06:02:45.187781 | orchestrator | Tuesday 24 March 2026 06:02:05 +0000 (0:00:02.234) 1:12:46.558 *********
2026-03-24 06:02:45.187785 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.187789 | orchestrator |
2026-03-24 06:02:45.187793 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-24 06:02:45.187801 | orchestrator | Tuesday 24 March 2026 06:02:09 +0000 (0:00:03.346) 1:12:49.905 *********
2026-03-24 06:02:45.187805 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.187809 | orchestrator |
2026-03-24 06:02:45.187813 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-24 06:02:45.187817 | orchestrator | Tuesday 24 March 2026 06:02:12 +0000 (0:00:03.189) 1:12:53.094 *********
2026-03-24 06:02:45.187820 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.187824 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:02:45.187828 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:02:45.187832 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:02:45.187836 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:02:45.187840 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:02:45.187880 | orchestrator |
2026-03-24 06:02:45.187885 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-24 06:02:45.187889 | orchestrator | Tuesday 24 March 2026 06:02:14 +0000 (0:00:02.769) 1:12:55.863 *********
2026-03-24 06:02:45.187893 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.187897 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:02:45.187901 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:02:45.187905 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:02:45.187909 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:02:45.187913 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:02:45.187916 | orchestrator |
2026-03-24 06:02:45.187920 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-24 06:02:45.187934 | orchestrator | Tuesday 24 March 2026 06:02:17 +0000 (0:00:02.084) 1:12:57.948 *********
2026-03-24 06:02:45.187940 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-24 06:02:45.187945 | orchestrator |
2026-03-24 06:02:45.187949 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-24 06:02:45.187953 | orchestrator | Tuesday 24 March 2026 06:02:19 +0000 (0:00:02.457) 1:13:00.405 *********
2026-03-24 06:02:45.187957 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.187961 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:02:45.187965 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:02:45.187969 | orchestrator | ok: [testbed-node-3]
2026-03-24 06:02:45.187972 | orchestrator | ok: [testbed-node-4]
2026-03-24 06:02:45.187976 | orchestrator | ok: [testbed-node-5]
2026-03-24 06:02:45.187980 | orchestrator |
2026-03-24 06:02:45.187984 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-24 06:02:45.187988 | orchestrator | Tuesday 24 March 2026 06:02:22 +0000 (0:00:02.789) 1:13:03.195 *********
2026-03-24 06:02:45.187992 | orchestrator | changed: [testbed-node-3]
2026-03-24 06:02:45.187996 | orchestrator | changed: [testbed-node-0]
2026-03-24 06:02:45.188000 | orchestrator | changed: [testbed-node-1]
2026-03-24 06:02:45.188004 | orchestrator | changed: [testbed-node-2]
2026-03-24 06:02:45.188008 | orchestrator | changed: [testbed-node-4]
2026-03-24 06:02:45.188012 | orchestrator | changed: [testbed-node-5]
2026-03-24 06:02:45.188016 | orchestrator |
2026-03-24 06:02:45.188019 | orchestrator | PLAY [Complete upgrade] ********************************************************
2026-03-24 06:02:45.188023 | orchestrator |
2026-03-24 06:02:45.188030 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-24 06:02:45.188034 | orchestrator | Tuesday 24 March 2026 06:02:26 +0000 (0:00:04.668) 1:13:07.864 *********
2026-03-24 06:02:45.188038 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.188042 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:02:45.188046 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:02:45.188050 | orchestrator |
2026-03-24 06:02:45.188054 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-24 06:02:45.188058 | orchestrator | Tuesday 24 March 2026 06:02:28 +0000 (0:00:01.676) 1:13:09.541 *********
2026-03-24 06:02:45.188062 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.188066 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:02:45.188077 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:02:45.188084 | orchestrator |
2026-03-24 06:02:45.188090 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-03-24 06:02:45.188097 | orchestrator | Tuesday 24 March 2026 06:02:30 +0000 (0:00:01.672) 1:13:11.213 *********
2026-03-24 06:02:45.188103 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:02:45.188109 | orchestrator |
2026-03-24 06:02:45.188115 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-03-24 06:02:45.188121 | orchestrator | Tuesday 24 March 2026 06:02:32 +0000 (0:00:02.286) 1:13:13.499 *********
2026-03-24 06:02:45.188127 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.188133 | orchestrator |
2026-03-24 06:02:45.188139 | orchestrator | PLAY [Upgrade node-exporter] ***************************************************
2026-03-24 06:02:45.188144 | orchestrator |
2026-03-24 06:02:45.188150 | orchestrator | TASK [Stop node-exporter] ******************************************************
2026-03-24 06:02:45.188156 | orchestrator | Tuesday 24 March 2026 06:02:34 +0000 (0:00:01.853) 1:13:15.353 *********
2026-03-24 06:02:45.188163 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.188169 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.188175 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.188182 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:02:45.188188 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:02:45.188194 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:02:45.188200 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:02:45.188207 | orchestrator |
2026-03-24 06:02:45.188213 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-24 06:02:45.188219 | orchestrator | Tuesday 24 March 2026 06:02:36 +0000 (0:00:02.408) 1:13:17.761 *********
2026-03-24 06:02:45.188226 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.188231 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.188234 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.188238 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:02:45.188242 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:02:45.188246 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:02:45.188250 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:02:45.188254 | orchestrator |
2026-03-24 06:02:45.188258 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-03-24 06:02:45.188262 | orchestrator | Tuesday 24 March 2026 06:02:39 +0000 (0:00:02.338) 1:13:20.099 *********
2026-03-24 06:02:45.188266 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.188270 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.188274 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.188278 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:02:45.188282 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:02:45.188287 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:02:45.188293 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:02:45.188300 | orchestrator |
2026-03-24 06:02:45.188306 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-03-24 06:02:45.188312 | orchestrator | Tuesday 24 March 2026 06:02:41 +0000 (0:00:02.402) 1:13:22.501 *********
2026-03-24 06:02:45.188318 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.188325 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.188332 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.188339 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:02:45.188345 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:02:45.188352 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:02:45.188358 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:02:45.188364 | orchestrator |
2026-03-24 06:02:45.188370 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
2026-03-24 06:02:45.188374 | orchestrator | Tuesday 24 March 2026 06:02:44 +0000 (0:00:02.405) 1:13:24.907 *********
2026-03-24 06:02:45.188378 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:02:45.188387 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:02:45.188391 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:02:45.188400 | orchestrator | skipping: [testbed-node-3]
2026-03-24 06:03:34.432036 | orchestrator | skipping: [testbed-node-4]
2026-03-24 06:03:34.432197 | orchestrator | skipping: [testbed-node-5]
2026-03-24 06:03:34.432223 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432243 | orchestrator |
2026-03-24 06:03:34.432266 | orchestrator | PLAY [Upgrade monitoring node] *************************************************
2026-03-24 06:03:34.432286 | orchestrator |
2026-03-24 06:03:34.432298 | orchestrator | TASK [Stop monitoring services] ************************************************
2026-03-24 06:03:34.432310 | orchestrator | Tuesday 24 March 2026 06:02:47 +0000 (0:00:03.307) 1:13:28.214 *********
2026-03-24 06:03:34.432328 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)
2026-03-24 06:03:34.432348 | orchestrator | skipping: [testbed-manager] => (item=prometheus)
2026-03-24 06:03:34.432368 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)
2026-03-24 06:03:34.432382 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432394 | orchestrator |
2026-03-24 06:03:34.432405 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-03-24 06:03:34.432416 | orchestrator | Tuesday 24 March 2026 06:02:48 +0000 (0:00:01.266) 1:13:29.481 *********
2026-03-24 06:03:34.432427 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432438 | orchestrator |
2026-03-24 06:03:34.432449 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-03-24 06:03:34.432460 | orchestrator | Tuesday 24 March 2026 06:02:49 +0000 (0:00:01.131) 1:13:30.613 *********
2026-03-24 06:03:34.432471 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432482 | orchestrator |
2026-03-24 06:03:34.432493 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-03-24 06:03:34.432522 | orchestrator | Tuesday 24 March 2026 06:02:50 +0000 (0:00:01.118) 1:13:31.731 *********
2026-03-24 06:03:34.432534 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432545 | orchestrator |
2026-03-24 06:03:34.432556 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-03-24 06:03:34.432567 | orchestrator | Tuesday 24 March 2026 06:02:51 +0000 (0:00:01.124) 1:13:32.856 *********
2026-03-24 06:03:34.432578 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432588 | orchestrator |
2026-03-24 06:03:34.432599 | orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
2026-03-24 06:03:34.432610 | orchestrator | Tuesday 24 March 2026 06:02:53 +0000 (0:00:01.129) 1:13:33.986 *********
2026-03-24 06:03:34.432621 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
2026-03-24 06:03:34.432632 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
2026-03-24 06:03:34.432643 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432654 | orchestrator |
2026-03-24 06:03:34.432665 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
2026-03-24 06:03:34.432676 | orchestrator | Tuesday 24 March 2026 06:02:54 +0000 (0:00:01.119) 1:13:35.105 *********
2026-03-24 06:03:34.432687 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432697 | orchestrator |
2026-03-24 06:03:34.432708 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
2026-03-24 06:03:34.432719 | orchestrator | Tuesday 24 March 2026 06:02:55 +0000 (0:00:01.188) 1:13:36.293 *********
2026-03-24 06:03:34.432730 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432741 | orchestrator |
2026-03-24 06:03:34.432752 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
2026-03-24 06:03:34.432763 | orchestrator | Tuesday 24 March 2026 06:02:56 +0000 (0:00:01.141) 1:13:37.435 *********
2026-03-24 06:03:34.432774 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432785 | orchestrator |
2026-03-24 06:03:34.432796 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
2026-03-24 06:03:34.432807 | orchestrator | Tuesday 24 March 2026 06:02:57 +0000 (0:00:01.192) 1:13:38.628 *********
2026-03-24 06:03:34.432841 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
2026-03-24 06:03:34.432852 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
2026-03-24 06:03:34.432863 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432930 | orchestrator |
2026-03-24 06:03:34.432942 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
2026-03-24 06:03:34.432953 | orchestrator | Tuesday 24 March 2026 06:02:58 +0000 (0:00:01.155) 1:13:39.784 *********
2026-03-24 06:03:34.432964 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.432975 | orchestrator |
2026-03-24 06:03:34.432986 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
2026-03-24 06:03:34.432997 | orchestrator | Tuesday 24 March 2026 06:02:59 +0000 (0:00:01.111) 1:13:40.895 *********
2026-03-24 06:03:34.433008 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.433019 | orchestrator |
2026-03-24 06:03:34.433029 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
2026-03-24 06:03:34.433040 | orchestrator | Tuesday 24 March 2026 06:03:01 +0000 (0:00:01.156) 1:13:42.052 *********
2026-03-24 06:03:34.433051 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.433062 | orchestrator |
2026-03-24 06:03:34.433073 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
2026-03-24 06:03:34.433084 | orchestrator | Tuesday 24 March 2026 06:03:02 +0000 (0:00:01.121) 1:13:43.173 *********
2026-03-24 06:03:34.433095 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:03:34.433106 | orchestrator |
2026-03-24 06:03:34.433117 | orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
2026-03-24 06:03:34.433128 | orchestrator |
2026-03-24 06:03:34.433139 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-24 06:03:34.433149 | orchestrator | Tuesday 24 March 2026 06:03:03 +0000 (0:00:01.597) 1:13:44.772 *********
2026-03-24 06:03:34.433161 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:03:34.433181 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:03:34.433200 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:03:34.433218 | orchestrator |
2026-03-24 06:03:34.433239 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-03-24 06:03:34.433251 | orchestrator | Tuesday 24 March 2026 06:03:05 +0000 (0:00:01.997) 1:13:46.769 *********
2026-03-24 06:03:34.433262 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:03:34.433273 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:03:34.433305 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:03:34.433316 | orchestrator |
2026-03-24 06:03:34.433328 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-03-24 06:03:34.433339 | orchestrator | Tuesday 24 March 2026 06:03:07 +0000 (0:00:01.539) 1:13:48.308 *********
2026-03-24 06:03:34.433350 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:03:34.433361 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:03:34.433372 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:03:34.433383 | orchestrator |
2026-03-24 06:03:34.433393 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-03-24 06:03:34.433405 | orchestrator | Tuesday 24 March 2026 06:03:08 +0000 (0:00:01.534) 1:13:49.843 *********
2026-03-24 06:03:34.433415 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:03:34.433427 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:03:34.433438 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:03:34.433449 | orchestrator |
2026-03-24 06:03:34.433460 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-03-24 06:03:34.433471 | orchestrator | Tuesday 24 March 2026 06:03:10 +0000 (0:00:01.653) 1:13:51.496 *********
2026-03-24 06:03:34.433482 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:03:34.433493 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:03:34.433504 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:03:34.433515 | orchestrator |
2026-03-24 06:03:34.433533 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
2026-03-24 06:03:34.433564 | orchestrator | Tuesday 24 March 2026 06:03:11 +0000 (0:00:01.349) 1:13:52.846 *********
2026-03-24 06:03:34.433582 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:03:34.433609 | orchestrator | skipping: [testbed-node-1]
2026-03-24 06:03:34.433628 | orchestrator | skipping: [testbed-node-2]
2026-03-24 06:03:34.433647 | orchestrator |
2026-03-24 06:03:34.433666 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
2026-03-24 06:03:34.433685 | orchestrator | Tuesday 24 March 2026 06:03:13 +0000 (0:00:01.418) 1:13:54.265 *********
2026-03-24 06:03:34.433704 | orchestrator | skipping: [testbed-node-0]
2026-03-24 06:03:34.433722 | orchestrator |
2026-03-24 06:03:34.433741 | orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
2026-03-24 06:03:34.433760 | orchestrator |
2026-03-24 06:03:34.433781 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-24 06:03:34.433800 | orchestrator | Tuesday 24 March 2026 06:03:15 +0000 (0:00:01.840) 1:13:56.106 *********
2026-03-24 06:03:34.433820 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:03:34.433839 | orchestrator |
2026-03-24 06:03:34.433856 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-24 06:03:34.433899 | orchestrator | Tuesday 24 March 2026 06:03:16 +0000 (0:00:01.485) 1:13:57.592 *********
2026-03-24 06:03:34.433917 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:03:34.433935 | orchestrator |
2026-03-24 06:03:34.433953 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-03-24 06:03:34.433970 | orchestrator | Tuesday 24 March 2026 06:03:17 +0000 (0:00:01.129) 1:13:58.722 *********
2026-03-24 06:03:34.433988 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:03:34.434007 | orchestrator |
2026-03-24 06:03:34.434107 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-03-24 06:03:34.434129 | orchestrator | Tuesday 24 March 2026 06:03:18 +0000 (0:00:01.147) 1:13:59.869 *********
2026-03-24 06:03:34.434149 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:03:34.434169 | orchestrator |
2026-03-24 06:03:34.434188 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-03-24 06:03:34.434206 | orchestrator | Tuesday 24 March 2026 06:03:21 +0000 (0:00:03.000) 1:14:02.869 *********
2026-03-24 06:03:34.434225 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:03:34.434240 | orchestrator |
2026-03-24 06:03:34.434251 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-03-24 06:03:34.434261 | orchestrator | Tuesday 24 March 2026 06:03:25 +0000 (0:00:03.370) 1:14:06.240 *********
2026-03-24 06:03:34.434272 | orchestrator | changed: [testbed-node-0]
2026-03-24 06:03:34.434283 | orchestrator |
2026-03-24 06:03:34.434294 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-03-24 06:03:34.434305 | orchestrator |
2026-03-24 06:03:34.434315 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-03-24 06:03:34.434326 | orchestrator | Tuesday 24 March 2026 06:03:27 +0000 (0:00:01.812) 1:14:08.053 *********
2026-03-24 06:03:34.434337 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:03:34.434348 | orchestrator | ok: [testbed-node-1]
2026-03-24 06:03:34.434359 | orchestrator | ok: [testbed-node-2]
2026-03-24 06:03:34.434369 | orchestrator |
2026-03-24 06:03:34.434380 | orchestrator | TASK [Show ceph status] ********************************************************
2026-03-24 06:03:34.434391 | orchestrator | Tuesday 24 March 2026 06:03:28 +0000 (0:00:01.757) 1:14:09.811 *********
2026-03-24 06:03:34.434402 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:03:34.434412 | orchestrator |
2026-03-24 06:03:34.434423 | orchestrator | TASK [Show all daemons version] ************************************************
2026-03-24 06:03:34.434434 | orchestrator | Tuesday 24 March 2026 06:03:31 +0000 (0:00:02.203) 1:14:12.015 *********
2026-03-24 06:03:34.434444 | orchestrator | ok: [testbed-node-0]
2026-03-24 06:03:34.434455 | orchestrator |
2026-03-24 06:03:34.434466 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 06:03:34.434477 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-03-24 06:03:34.434508 | orchestrator | testbed-manager : ok=25 changed=1 unreachable=0 failed=0 skipped=76 rescued=0 ignored=0
2026-03-24 06:03:34.434520 | orchestrator | testbed-node-0 : ok=248 changed=20 unreachable=0 failed=0 skipped=376 rescued=0 ignored=0
2026-03-24 06:03:34.434531 | orchestrator | testbed-node-1 : ok=191 changed=16 unreachable=0 failed=0 skipped=350 rescued=0 ignored=0
2026-03-24 06:03:34.434555 | orchestrator | testbed-node-2 : ok=196 changed=15 unreachable=0 failed=0 skipped=351 rescued=0 ignored=0
2026-03-24 06:03:34.872605 | orchestrator | testbed-node-3 : ok=317 changed=21 unreachable=0 failed=0 skipped=362 rescued=0 ignored=0
2026-03-24 06:03:34.872716 | orchestrator | testbed-node-4 : ok=307 changed=18 unreachable=0 failed=0 skipped=359 rescued=0 ignored=0
2026-03-24 06:03:34.872724 | orchestrator | testbed-node-5 : ok=303 changed=18 unreachable=0 failed=0 skipped=344 rescued=0 ignored=0
2026-03-24 06:03:34.872729 |
orchestrator | 2026-03-24 06:03:34.872734 | orchestrator | 2026-03-24 06:03:34.872738 | orchestrator | 2026-03-24 06:03:34.872742 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-24 06:03:34.872748 | orchestrator | Tuesday 24 March 2026 06:03:34 +0000 (0:00:03.288) 1:14:15.303 ********* 2026-03-24 06:03:34.872752 | orchestrator | =============================================================================== 2026-03-24 06:03:34.872756 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 75.22s 2026-03-24 06:03:34.872760 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 74.50s 2026-03-24 06:03:34.872764 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.74s 2026-03-24 06:03:34.872787 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.52s 2026-03-24 06:03:34.872793 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.98s 2026-03-24 06:03:34.872799 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.99s 2026-03-24 06:03:34.872806 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 29.76s 2026-03-24 06:03:34.872812 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 26.85s 2026-03-24 06:03:34.872819 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 23.09s 2026-03-24 06:03:34.872825 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.01s 2026-03-24 06:03:34.872831 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.99s 2026-03-24 06:03:34.872838 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.39s 2026-03-24 06:03:34.872844 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.37s 2026-03-24 06:03:34.872851 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.47s 2026-03-24 06:03:34.872857 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 13.12s 2026-03-24 06:03:34.872863 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.83s 2026-03-24 06:03:34.872953 | orchestrator | Stop ceph osd ---------------------------------------------------------- 12.03s 2026-03-24 06:03:34.872960 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.22s 2026-03-24 06:03:34.872966 | orchestrator | Stop ceph mon ---------------------------------------------------------- 10.94s 2026-03-24 06:03:34.872973 | orchestrator | Restart active mds ----------------------------------------------------- 10.35s 2026-03-24 06:03:35.069598 | orchestrator | + osism apply cephclient 2026-03-24 06:03:36.667485 | orchestrator | 2026-03-24 06:03:36 | INFO  | Task cbe04b38-c6bc-4208-99c8-e6273ccf199b (cephclient) was prepared for execution. 2026-03-24 06:03:36.667664 | orchestrator | 2026-03-24 06:03:36 | INFO  | It takes a moment until task cbe04b38-c6bc-4208-99c8-e6273ccf199b (cephclient) has been started and output is visible here. 
2026-03-24 06:04:04.334735 | orchestrator |
2026-03-24 06:04:04.334858 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-03-24 06:04:04.334908 | orchestrator |
2026-03-24 06:04:04.334923 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-03-24 06:04:04.334935 | orchestrator | Tuesday 24 March 2026 06:03:42 +0000 (0:00:01.742) 0:00:01.742 *********
2026-03-24 06:04:04.334946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-03-24 06:04:04.334959 | orchestrator |
2026-03-24 06:04:04.334971 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-03-24 06:04:04.334982 | orchestrator | Tuesday 24 March 2026 06:03:44 +0000 (0:00:01.786) 0:00:03.529 *********
2026-03-24 06:04:04.334994 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-03-24 06:04:04.335005 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-03-24 06:04:04.335017 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-03-24 06:04:04.335028 | orchestrator |
2026-03-24 06:04:04.335040 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-03-24 06:04:04.335051 | orchestrator | Tuesday 24 March 2026 06:03:47 +0000 (0:00:02.503) 0:00:06.033 *********
2026-03-24 06:04:04.335140 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-03-24 06:04:04.335162 | orchestrator |
2026-03-24 06:04:04.335179 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-03-24 06:04:04.335197 | orchestrator | Tuesday 24 March 2026 06:03:49 +0000 (0:00:02.093) 0:00:08.126 *********
2026-03-24 06:04:04.335216 | orchestrator | ok: [testbed-manager]
2026-03-24 06:04:04.335235 | orchestrator |
2026-03-24 06:04:04.335254 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-03-24 06:04:04.335274 | orchestrator | Tuesday 24 March 2026 06:03:51 +0000 (0:00:01.899) 0:00:10.025 *********
2026-03-24 06:04:04.335294 | orchestrator | ok: [testbed-manager]
2026-03-24 06:04:04.335313 | orchestrator |
2026-03-24 06:04:04.335333 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-03-24 06:04:04.335353 | orchestrator | Tuesday 24 March 2026 06:03:53 +0000 (0:00:02.044) 0:00:11.917 *********
2026-03-24 06:04:04.335373 | orchestrator | ok: [testbed-manager]
2026-03-24 06:04:04.335392 | orchestrator |
2026-03-24 06:04:04.335410 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-03-24 06:04:04.335422 | orchestrator | Tuesday 24 March 2026 06:03:55 +0000 (0:00:02.044) 0:00:13.961 *********
2026-03-24 06:04:04.335433 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-03-24 06:04:04.335445 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-03-24 06:04:04.335457 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-03-24 06:04:04.335468 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-03-24 06:04:04.335480 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-03-24 06:04:04.335491 | orchestrator |
2026-03-24 06:04:04.335502 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-03-24 06:04:04.335513 | orchestrator | Tuesday 24 March 2026 06:04:00 +0000 (0:00:04.928) 0:00:18.890 *********
2026-03-24 06:04:04.335524 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-03-24 06:04:04.335535 | orchestrator |
2026-03-24 06:04:04.335545 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-03-24 06:04:04.335576 | orchestrator | Tuesday 24 March 2026 06:04:01 +0000 (0:00:01.415) 0:00:20.305 *********
2026-03-24 06:04:04.335588 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:04:04.335599 | orchestrator |
2026-03-24 06:04:04.335610 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-03-24 06:04:04.335648 | orchestrator | Tuesday 24 March 2026 06:04:02 +0000 (0:00:01.086) 0:00:21.391 *********
2026-03-24 06:04:04.335660 | orchestrator | skipping: [testbed-manager]
2026-03-24 06:04:04.335671 | orchestrator |
2026-03-24 06:04:04.335681 | orchestrator | PLAY RECAP *********************************************************************
2026-03-24 06:04:04.335693 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-24 06:04:04.335704 | orchestrator |
2026-03-24 06:04:04.335715 | orchestrator |
2026-03-24 06:04:04.335726 | orchestrator | TASKS RECAP ********************************************************************
2026-03-24 06:04:04.335737 | orchestrator | Tuesday 24 March 2026 06:04:04 +0000 (0:00:01.469) 0:00:22.861 *********
2026-03-24 06:04:04.335747 | orchestrator | ===============================================================================
2026-03-24 06:04:04.335758 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.93s
2026-03-24 06:04:04.335768 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.50s
2026-03-24 06:04:04.335779 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.09s
2026-03-24 06:04:04.335789 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.04s
2026-03-24 06:04:04.335800 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.90s
2026-03-24 06:04:04.335811 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.89s
2026-03-24 06:04:04.335821 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.79s
2026-03-24 06:04:04.335832 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.47s
2026-03-24 06:04:04.335843 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.42s
2026-03-24 06:04:04.335853 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.09s
2026-03-24 06:04:04.688308 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-24 06:04:04.688405 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-03-24 06:04:04.698144 | orchestrator | + set -e
2026-03-24 06:04:04.699253 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-24 06:04:04.699324 | orchestrator | ++ export INTERACTIVE=false
2026-03-24 06:04:04.699339 | orchestrator | ++ INTERACTIVE=false
2026-03-24 06:04:04.699352 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-24 06:04:04.699369 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-24 06:04:04.699386 | orchestrator | + source /opt/manager-vars.sh
2026-03-24 06:04:04.699410 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-24 06:04:04.699429 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-24 06:04:04.699446 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-24 06:04:04.699461 | orchestrator | ++ CEPH_VERSION=reef
2026-03-24 06:04:04.699478 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-24 06:04:04.699492 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-24 06:04:04.699506 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-24 06:04:04.699521 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-24 06:04:04.699539 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-24 06:04:04.699599 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-24 06:04:04.699613 | orchestrator | ++ export ARA=false
2026-03-24 06:04:04.699623 | orchestrator | ++ ARA=false
2026-03-24 06:04:04.699633 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-24 06:04:04.699643 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-24 06:04:04.699653 | orchestrator | ++ export TEMPEST=false
2026-03-24 06:04:04.699662 | orchestrator | ++ TEMPEST=false
2026-03-24 06:04:04.699672 | orchestrator | ++ export IS_ZUUL=true
2026-03-24 06:04:04.699681 | orchestrator | ++ IS_ZUUL=true
2026-03-24 06:04:04.699691 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246
2026-03-24 06:04:04.699701 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.246
2026-03-24 06:04:04.699710 | orchestrator | ++ export EXTERNAL_API=false
2026-03-24 06:04:04.699720 | orchestrator | ++ EXTERNAL_API=false
2026-03-24 06:04:04.699729 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-24 06:04:04.699739 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-24 06:04:04.699748 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-24 06:04:04.699758 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-24 06:04:04.699768 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-24 06:04:04.699801 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-24 06:04:04.699811 | orchestrator | ++ export RABBITMQ3TO4=true
2026-03-24 06:04:04.699821 | orchestrator | ++ RABBITMQ3TO4=true
2026-03-24 06:04:04.699830 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-24 06:04:04.699974 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-24 06:04:04.705746 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-03-24 06:04:04.705805 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-03-24 06:04:04.705817 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-24 06:04:04.705826 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-03-24 06:04:24.770581 | orchestrator | 2026-03-24 06:04:24 | ERROR  | Unable to get ansible vault password
2026-03-24 06:04:24.770712 | orchestrator | 2026-03-24 06:04:24 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-24 06:04:24.770737 | orchestrator | 2026-03-24 06:04:24 | ERROR  | Dropping encrypted entries
2026-03-24 06:04:24.804627 | orchestrator | 2026-03-24 06:04:24 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-03-24 06:04:24.805520 | orchestrator | 2026-03-24 06:04:24 | INFO  | Kolla configuration check passed
2026-03-24 06:04:24.980791 | orchestrator | 2026-03-24 06:04:24 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-03-24 06:04:24.994668 | orchestrator | 2026-03-24 06:04:24 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-03-24 06:04:25.214254 | orchestrator | + osism migrate rabbitmq3to4 list
2026-03-24 06:04:43.282676 | orchestrator | 2026-03-24 06:04:43 | ERROR  | Unable to get ansible vault password
2026-03-24 06:04:43.282794 | orchestrator | 2026-03-24 06:04:43 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-03-24 06:04:43.282832 | orchestrator | 2026-03-24 06:04:43 | ERROR  | Dropping encrypted entries
2026-03-24 06:04:43.324134 | orchestrator | 2026-03-24 06:04:43 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-03-24 06:04:43.470778 | orchestrator | 2026-03-24 06:04:43 | INFO  | Found 208 classic queue(s) in vhost '/':
2026-03-24 06:04:43.471105 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - alarm.all.sample (vhost: /, messages: 0)
2026-03-24 06:04:43.471136 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - alarming.sample (vhost: /, messages: 0)
2026-03-24 06:04:43.471148 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - barbican.workers (vhost: /, messages: 0)
2026-03-24 06:04:43.471169 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0)
2026-03-24 06:04:43.471180 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - barbican.workers_fanout_baccbe8f70c746758908be3d18abec79 (vhost: /, messages: 0)
2026-03-24 06:04:43.471304 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - barbican.workers_fanout_cad041757b0849d2987a9173dd0e2d71 (vhost: /, messages: 0)
2026-03-24 06:04:43.471543 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - barbican.workers_fanout_f4a0970e21af44d5b369af6d529fbd02 (vhost: /, messages: 0)
2026-03-24 06:04:43.471742 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0)
2026-03-24 06:04:43.471992 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central (vhost: /, messages: 1)
2026-03-24 06:04:43.472247 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.472448 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.472731 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.472921 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central_fanout_04be8287536f4571bde325f4a9c41d23 (vhost: /, messages: 0)
2026-03-24 06:04:43.473116 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central_fanout_15601a61a4d745229d535656d598ac0d (vhost: /, messages: 0)
2026-03-24 06:04:43.473290 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central_fanout_219fc58ab0194c91961227d56038115f (vhost: /, messages: 0)
2026-03-24 06:04:43.473472 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central_fanout_27a36489672a47dabe1ae8238eb38a4e (vhost: /, messages: 0)
2026-03-24 06:04:43.473725 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central_fanout_75578b07217d44e78f437c59d7f04866 (vhost: /, messages: 0)
2026-03-24 06:04:43.474514 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - central_fanout_fd65134ef7b24f6fae570f91e8aa91a6 (vhost: /, messages: 0)
2026-03-24 06:04:43.474718 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-backup (vhost: /, messages: 0)
2026-03-24 06:04:43.474747 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.474763 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.475251 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.475279 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-backup_fanout_14eb525bd8ad4800b412a8908bd17b3a (vhost: /, messages: 0)
2026-03-24 06:04:43.475289 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-backup_fanout_4040d2c8121e45e6890ebeefa936050a (vhost: /, messages: 0)
2026-03-24 06:04:43.475504 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-backup_fanout_aa1eba65ee764f5e849f90595a166a18 (vhost: /, messages: 0)
2026-03-24 06:04:43.475723 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-scheduler (vhost: /, messages: 0)
2026-03-24 06:04:43.475880 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.476180 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.476343 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.476521 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-scheduler_fanout_131bec08eb9a4ad78f4aeeb0e5cef079 (vhost: /, messages: 0)
2026-03-24 06:04:43.476909 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-scheduler_fanout_1f2cf4f0152f4db696a2f93996c65736 (vhost: /, messages: 0)
2026-03-24 06:04:43.477006 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-scheduler_fanout_9bd62bb9eaf04b208518fca4553d3cae (vhost: /, messages: 0)
2026-03-24 06:04:43.477107 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume (vhost: /, messages: 0)
2026-03-24 06:04:43.477397 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0)
2026-03-24 06:04:43.477506 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.477674 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_4fabdcbab875426794a395431c674960 (vhost: /, messages: 0)
2026-03-24 06:04:43.479713 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0)
2026-03-24 06:04:43.479781 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.479791 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_58bf9d92acd945e5afebba0e71a0ad69 (vhost: /, messages: 0)
2026-03-24 06:04:43.479800 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0)
2026-03-24 06:04:43.479809 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.479817 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_0391e34e1b5a4de7be16812e02557519 (vhost: /, messages: 0)
2026-03-24 06:04:43.479825 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume_fanout_524f9d26e9ff46908c7ff26f1d31b3ba (vhost: /, messages: 0)
2026-03-24 06:04:43.479833 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume_fanout_6901eb8220b14b6ea9efec92cffe777a (vhost: /, messages: 0)
2026-03-24 06:04:43.479939 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - cinder-volume_fanout_cc27ba146c0e42ec8a740e5b343311f6 (vhost: /, messages: 0)
2026-03-24 06:04:43.479949 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - compute (vhost: /, messages: 0)
2026-03-24 06:04:43.479958 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0)
2026-03-24 06:04:43.479966 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0)
2026-03-24 06:04:43.479974 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0)
2026-03-24 06:04:43.479982 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - compute_fanout_089f87392e104cfe90ca3b82eca15a2b (vhost: /, messages: 0)
2026-03-24 06:04:43.479990 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - compute_fanout_8c5edfb1b0ed444487c48773c1374c1d (vhost: /, messages: 0)
2026-03-24 06:04:43.479998 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - compute_fanout_e876e731d4d74091a7e0137d53516238 (vhost: /, messages: 0)
2026-03-24 06:04:43.480006 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor (vhost: /, messages: 0)
2026-03-24 06:04:43.480014 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.480022 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.480030 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.480038 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor_fanout_36b8c3bd3c2e4d2599e24399d07bb66b (vhost: /, messages: 0)
2026-03-24 06:04:43.480099 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor_fanout_52aa0a6453634c8ba1150ff62d24a443 (vhost: /, messages: 0)
2026-03-24 06:04:43.480110 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor_fanout_9684afb97660497ba0ae7db2ba4a5496 (vhost: /, messages: 0)
2026-03-24 06:04:43.480118 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor_fanout_a009b641b1534010b0d07198ded730da (vhost: /, messages: 0)
2026-03-24 06:04:43.480133 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor_fanout_cb48c2f046bf43a291ffa8393a09556e (vhost: /, messages: 0)
2026-03-24 06:04:43.480141 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - conductor_fanout_f110aeab85e14325a856618ce41b6d07 (vhost: /, messages: 0)
2026-03-24 06:04:43.480159 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - event.sample (vhost: /, messages: 9)
2026-03-24 06:04:43.480176 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor (vhost: /, messages: 0)
2026-03-24 06:04:43.480184 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor.6rhv67ck6sfr (vhost: /, messages: 0)
2026-03-24 06:04:43.480350 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor.kj7twh3tu7fj (vhost: /, messages: 0)
2026-03-24 06:04:43.480422 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor.lfodvt3sxfc4 (vhost: /, messages: 0)
2026-03-24 06:04:43.480584 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor_fanout_1392e2c21da041d69052a3768291f6c9 (vhost: /, messages: 0)
2026-03-24 06:04:43.480680 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor_fanout_2a8c319499da4276a03a67e8f2a45044 (vhost: /, messages: 0)
2026-03-24 06:04:43.480700 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor_fanout_46584f7b11bb4a8a8de481a408852b79 (vhost: /, messages: 0)
2026-03-24 06:04:43.480876 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor_fanout_510b689b6e244c8492c10df8fc902d40 (vhost: /, messages: 0)
2026-03-24 06:04:43.480993 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor_fanout_6ae9790719f1422d9ef0e8073e6fb40b (vhost: /, messages: 0)
2026-03-24 06:04:43.481060 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor_fanout_8fc5eb680a0b46909600b5364043604a (vhost: /, messages: 0)
2026-03-24 06:04:43.481256 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor_fanout_9f492fa0585d4a47ad2955cd0415ac10 (vhost: /, messages: 0)
2026-03-24 06:04:43.481278 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor_fanout_d0715372557a4f298e4193b5a3dfdb60 (vhost: /, messages: 0)
2026-03-24 06:04:43.481378 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - magnum-conductor_fanout_e68c547de3404716a3afee3545faeed2 (vhost: /, messages: 0)
2026-03-24 06:04:43.481390 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-data (vhost: /, messages: 0)
2026-03-24 06:04:43.481469 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.481480 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.481651 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.481800 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-data_fanout_15956c2b320e4f3da3861e43d0208148 (vhost: /, messages: 0)
2026-03-24 06:04:43.481881 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-data_fanout_8ceb4bec2f544ac3b30f09dded7e0c58 (vhost: /, messages: 0)
2026-03-24 06:04:43.481915 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-data_fanout_f80dae4efb4841f6907a91c720f8a2d5 (vhost: /, messages: 0)
2026-03-24 06:04:43.482150 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-scheduler (vhost: /, messages: 0)
2026-03-24 06:04:43.482163 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.482242 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.482406 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.482427 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-scheduler_fanout_1464841fae984cd79b7aae3bc8f39755 (vhost: /, messages: 0)
2026-03-24 06:04:43.482519 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-scheduler_fanout_4437c7712a6b4d75af7f4a9a3f250640 (vhost: /, messages: 0)
2026-03-24 06:04:43.482596 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-scheduler_fanout_54c851b7be5240669a4d304d9490d32a (vhost: /, messages: 0)
2026-03-24 06:04:43.482734 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-share (vhost: /, messages: 0)
2026-03-24 06:04:43.482886 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0)
2026-03-24 06:04:43.482928 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0)
2026-03-24 06:04:43.482936 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0)
2026-03-24 06:04:43.483171 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-share_fanout_1b90184702fa49c59e8961697690dc97 (vhost: /, messages: 0)
2026-03-24 06:04:43.483184 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-share_fanout_7411ee022888467e899594978cba0416 (vhost: /, messages: 0)
2026-03-24 06:04:43.483246 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - manila-share_fanout_835f427557594e47860522cd7add0eca (vhost: /, messages: 0)
2026-03-24 06:04:43.483329 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - notifications.audit (vhost: /, messages: 0)
2026-03-24 06:04:43.483339 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - notifications.critical (vhost: /, messages: 0)
2026-03-24 06:04:43.483440 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - notifications.debug (vhost: /, messages: 0)
2026-03-24 06:04:43.483557 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - notifications.error (vhost: /, messages: 0)
2026-03-24 06:04:43.483675 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - notifications.info (vhost: /, messages: 0)
2026-03-24 06:04:43.483881 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - notifications.sample (vhost: /, messages: 0)
2026-03-24 06:04:43.483921 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - notifications.warn (vhost: /, messages: 0)
2026-03-24 06:04:43.483929 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0)
2026-03-24 06:04:43.484019 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.484101 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.484112 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.484282 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - octavia_provisioning_v2_fanout_6a5d381a406a43e699e578548bd8d210 (vhost: /, messages: 0)
2026-03-24 06:04:43.484295 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - octavia_provisioning_v2_fanout_6ca212c79d5847d584a8038551268ff0 (vhost: /, messages: 0)
2026-03-24 06:04:43.484415 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - octavia_provisioning_v2_fanout_d64fa38dd4074504a11d538c513bdec4 (vhost: /, messages: 0)
2026-03-24 06:04:43.484559 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer (vhost: /, messages: 0)
2026-03-24 06:04:43.484569 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.484685 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.484703 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.484842 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer_fanout_143dc826d16d4418bd5ca3666689ebb3 (vhost: /, messages: 0)
2026-03-24 06:04:43.484988 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer_fanout_42bf3521d8824b9286f1d6555d748103 (vhost: /, messages: 0)
2026-03-24 06:04:43.485383 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer_fanout_7cdbec382fe5408da0c1ebaec65ab949 (vhost: /, messages: 0)
2026-03-24 06:04:43.485444 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer_fanout_aa7a7dbb4b0c41cc8f85dc6859fe5de4 (vhost: /, messages: 0)
2026-03-24 06:04:43.485451 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer_fanout_c1b9746cf0e14c6e9b5e1d81a50d6de3 (vhost: /, messages: 0)
2026-03-24 06:04:43.485457 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - producer_fanout_d98d2469a8c74c26bebb142b437d1233 (vhost: /, messages: 0)
2026-03-24 06:04:43.485463 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin (vhost: /, messages: 0)
2026-03-24 06:04:43.485469 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.485526 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.485664 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.485673 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin_fanout_1455ca97e10c45d2934101c07d4a938c (vhost: /, messages: 0)
2026-03-24 06:04:43.485736 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin_fanout_24f107d73f344a73a6539c2cac2f6a24 (vhost: /, messages: 0)
2026-03-24 06:04:43.485824 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin_fanout_2b37bae5d86a466a8ad6fdf661781f40 (vhost: /, messages: 0)
2026-03-24 06:04:43.486222 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin_fanout_2cb959f5b87d4962beed07895184df52 (vhost: /, messages: 0)
2026-03-24 06:04:43.486248 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin_fanout_613abd9eb1b947728d63d0ea8ec0cc80 (vhost: /, messages: 0)
2026-03-24 06:04:43.486258 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin_fanout_82964fadd4e346d5a97ab2583c973862 (vhost: /, messages: 0)
2026-03-24 06:04:43.486272 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin_fanout_a1a3db6645f242aba240a5521ca7283c (vhost: /, messages: 0)
2026-03-24 06:04:43.486336 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin_fanout_c6c86897397f4b8caa2c6ebbd2c7dda4 (vhost: /, messages: 0)
2026-03-24 06:04:43.486347 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-plugin_fanout_eaaf27b59786476e96b8a59416dd0c95 (vhost: /, messages: 0)
2026-03-24 06:04:43.486415 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin (vhost: /, messages: 0)
2026-03-24 06:04:43.486424 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-03-24 06:04:43.486591 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-03-24 06:04:43.486677 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-03-24 06:04:43.486790 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_13ba8b4dc4304125876f110672aab1d4 (vhost: /, messages: 0)
2026-03-24 06:04:43.486803 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_1e9b1e249f3940b9a50225522966a5b8 (vhost: /, messages: 0)
2026-03-24 06:04:43.487190 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - 
q-reports-plugin_fanout_26f8150331794ee8a231777453ebbe53 (vhost: /, messages: 0) 2026-03-24 06:04:43.487231 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_46d1728eae93460686a06c8225e30ea5 (vhost: /, messages: 0) 2026-03-24 06:04:43.487314 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_5cf00aff009547dfb43c17b70ea4e245 (vhost: /, messages: 0) 2026-03-24 06:04:43.487325 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_6249067b45624a8898608656e3e84182 (vhost: /, messages: 0) 2026-03-24 06:04:43.487331 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_6bbe311b8b9842d2aa0feca18d0ec401 (vhost: /, messages: 0) 2026-03-24 06:04:43.487336 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_7e8ffd1ae14747919838877348f079a1 (vhost: /, messages: 0) 2026-03-24 06:04:43.487342 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_81c8bcaffebc465a887558945c546670 (vhost: /, messages: 0) 2026-03-24 06:04:43.487601 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_837b09d1626049759b4c3d2363fb78f4 (vhost: /, messages: 0) 2026-03-24 06:04:43.487716 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_8776bc5a2eb54f5a9d173f3cb2b93e48 (vhost: /, messages: 0) 2026-03-24 06:04:43.487727 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_9a39df4c7ed749a399f965031ebb954b (vhost: /, messages: 0) 2026-03-24 06:04:43.487736 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_a70e5dbeb98648a194f3f94f8161439b (vhost: /, messages: 0) 2026-03-24 06:04:43.487748 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_afd4ae80848a4ab7a8b28b233c4ba0d5 (vhost: /, messages: 0) 2026-03-24 06:04:43.487757 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_b9208ba817dc464f96faf6eb820e13ca (vhost: /, messages: 0) 2026-03-24 
06:04:43.487954 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_e16400a6b57748608226743f63b54910 (vhost: /, messages: 0) 2026-03-24 06:04:43.487968 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_e9be4c9a0de043008a1294c0668979ab (vhost: /, messages: 0) 2026-03-24 06:04:43.488152 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-reports-plugin_fanout_fe31ed643c2f4c6bb3c033887b5a9584 (vhost: /, messages: 0) 2026-03-24 06:04:43.488167 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-03-24 06:04:43.488176 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-03-24 06:04:43.488198 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-03-24 06:04:43.488351 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-03-24 06:04:43.488365 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions_fanout_17046cd2c2f14205992b002d278fa169 (vhost: /, messages: 0) 2026-03-24 06:04:43.488463 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions_fanout_2a8974c736ba4c568e1786da7ce577d7 (vhost: /, messages: 0) 2026-03-24 06:04:43.488558 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions_fanout_2ea122e2d05a46469b44434df5826df3 (vhost: /, messages: 0) 2026-03-24 06:04:43.488572 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions_fanout_3ee2708961ff4edeb3c6cf85e0eee230 (vhost: /, messages: 0) 2026-03-24 06:04:43.488581 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions_fanout_4be022b23cde4f6f91fad694c2dcbbd9 (vhost: /, messages: 0) 2026-03-24 06:04:43.488749 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - 
q-server-resource-versions_fanout_50cf1d9f78f04584a603b462b21a886f (vhost: /, messages: 0) 2026-03-24 06:04:43.488773 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions_fanout_92b83fd87cfb4b06a4179dcd99ef7fde (vhost: /, messages: 0) 2026-03-24 06:04:43.488907 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions_fanout_ba0c237ed71d4cdb89042e5df6f416eb (vhost: /, messages: 0) 2026-03-24 06:04:43.488921 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - q-server-resource-versions_fanout_ddc213c194034e81bcfa049adbd1f705 (vhost: /, messages: 0) 2026-03-24 06:04:43.488931 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_1ba974cd67214e12bc60c264feb0d5cf (vhost: /, messages: 0) 2026-03-24 06:04:43.489113 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_2eed838fcfd147039986848e58109821 (vhost: /, messages: 0) 2026-03-24 06:04:43.489128 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_45906c06e7f94f86882d9de2f8621805 (vhost: /, messages: 0) 2026-03-24 06:04:43.489184 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_470dd4173be943bb8b2aa79715efa310 (vhost: /, messages: 0) 2026-03-24 06:04:43.489198 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_536db87917e84bb09c7e425c0b5cc3ea (vhost: /, messages: 0) 2026-03-24 06:04:43.489286 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_640b01180e69401fbc767d180392923f (vhost: /, messages: 0) 2026-03-24 06:04:43.489298 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_645674e9df0f46bb9d37c6b6fbd2436c (vhost: /, messages: 0) 2026-03-24 06:04:43.489435 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_73b7b266b4ce4f5f91d82af74888bc62 (vhost: /, messages: 0) 2026-03-24 06:04:43.489448 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_791c887b55bc4fff836dce7190ffe1ac (vhost: /, messages: 0) 2026-03-24 06:04:43.489626 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_a1474539a7374060b040d84041e8df4e (vhost: /, messages: 0) 
2026-03-24 06:04:43.489639 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_aef94a9d8dc944ac839109e32a33f798 (vhost: /, messages: 0) 2026-03-24 06:04:43.489850 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_b1d9e3ebc69243e1bf0a0bc1dce9b205 (vhost: /, messages: 0) 2026-03-24 06:04:43.489863 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_baac5397a4604718bdec7d3dbcb58f63 (vhost: /, messages: 0) 2026-03-24 06:04:43.489873 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_c119e22e13a848e6af559bd8c62fee20 (vhost: /, messages: 0) 2026-03-24 06:04:43.490125 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_cae3094567564d8080613ab4347d93c7 (vhost: /, messages: 0) 2026-03-24 06:04:43.490144 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_d697b11139bc42ceb54d0ca77d22e577 (vhost: /, messages: 0) 2026-03-24 06:04:43.490152 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_e5c2c699a62d4956b81bdbaa83d7a07f (vhost: /, messages: 1) 2026-03-24 06:04:43.490235 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_f1e5c594ce57460c98a5895c2dcfbf7a (vhost: /, messages: 0) 2026-03-24 06:04:43.490244 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - reply_fc3d580586b54ba795ded26e211eec0a (vhost: /, messages: 0) 2026-03-24 06:04:43.490325 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - scheduler (vhost: /, messages: 0) 2026-03-24 06:04:43.490345 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-03-24 06:04:43.490441 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-03-24 06:04:43.490453 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-03-24 06:04:43.490551 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - scheduler_fanout_345ac3177cda4e29a5f51f645e603aa3 (vhost: /, messages: 0) 2026-03-24 06:04:43.490679 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - 
scheduler_fanout_4cb70558f5d344ba9fe9d3010161c197 (vhost: /, messages: 0) 2026-03-24 06:04:43.490944 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - scheduler_fanout_5a2bb5a4f52f425eaf0632c375f1ec80 (vhost: /, messages: 0) 2026-03-24 06:04:43.490959 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - scheduler_fanout_a0c090b98d8746afa94fd855adb65893 (vhost: /, messages: 0) 2026-03-24 06:04:43.490968 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - scheduler_fanout_ab22a851ca674e32adeb0bc2ac37ccad (vhost: /, messages: 0) 2026-03-24 06:04:43.490977 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - scheduler_fanout_d21d568f8a3e49b5b735f97860189406 (vhost: /, messages: 0) 2026-03-24 06:04:43.491054 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - worker (vhost: /, messages: 0) 2026-03-24 06:04:43.491066 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0) 2026-03-24 06:04:43.491151 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0) 2026-03-24 06:04:43.491164 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0) 2026-03-24 06:04:43.491173 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - worker_fanout_308b81be8f5e4f5eb7e5b731bbba4849 (vhost: /, messages: 0) 2026-03-24 06:04:43.491283 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - worker_fanout_53953b02f85745f2a468691d81c7177a (vhost: /, messages: 0) 2026-03-24 06:04:43.491292 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - worker_fanout_6f34490c25ab4a509f7848ee2c99152b (vhost: /, messages: 0) 2026-03-24 06:04:43.491374 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - worker_fanout_8bb7141841184b21ae167d2873f58f81 (vhost: /, messages: 0) 2026-03-24 06:04:43.491495 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - worker_fanout_c039f003e14a4994a105630139b7735b (vhost: /, messages: 0) 2026-03-24 06:04:43.491507 | orchestrator | 2026-03-24 06:04:43 | INFO  |  - 
worker_fanout_f0f42839c9584669b9a8a852467b51b0 (vhost: /, messages: 0)
2026-03-24 06:04:43.652244 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-03-24 06:04:45.538542 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-03-24 06:04:45.538641 | orchestrator | [--no-close-connections] [--quorum]
2026-03-24 06:04:45.538658 | orchestrator | [--vhost VHOST]
2026-03-24 06:04:45.538671 | orchestrator | [{list,delete,prepare,check}]
2026-03-24 06:04:45.538685 | orchestrator | [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-03-24 06:04:45.538698 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-03-24 06:04:46.141024 | orchestrator | ERROR
2026-03-24 06:04:46.141283 | orchestrator | {
2026-03-24 06:04:46.141333 | orchestrator | "delta": "2:01:59.166176",
2026-03-24 06:04:46.141358 | orchestrator | "end": "2026-03-24 06:04:45.707262",
2026-03-24 06:04:46.141381 | orchestrator | "msg": "non-zero return code",
2026-03-24 06:04:46.141457 | orchestrator | "rc": 2,
2026-03-24 06:04:46.141488 | orchestrator | "start": "2026-03-24 04:02:46.541086"
2026-03-24 06:04:46.141508 | orchestrator | } failure
2026-03-24 06:04:46.454661 |
2026-03-24 06:04:46.454927 | PLAY RECAP
2026-03-24 06:04:46.455050 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-03-24 06:04:46.455107 |
2026-03-24 06:04:46.708689 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-03-24 06:04:46.711190 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-24 06:04:47.590071 |
2026-03-24 06:04:47.590240 | PLAY [Post output play]
2026-03-24 06:04:47.607680 |
2026-03-24 06:04:47.607837 | LOOP [stage-output : Register sources]
2026-03-24 06:04:47.681196 |
2026-03-24
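The job fails here because it calls `osism migrate rabbitmq3to4 list-exchanges`, but this version of the CLI only accepts the subcommands `list`, `delete`, `prepare`, and `check` (presumably `list` was intended). The exit status 2 matches the `"rc": 2` in the task result. A minimal sketch of that behavior, assuming an argparse-style parser as the printed usage text suggests (this is not the osism project's actual parser):

```python
import argparse

# Hypothetical reconstruction of the parser implied by the usage text above.
parser = argparse.ArgumentParser(prog="osism migrate rabbitmq3to4")
parser.add_argument("command", nargs="?",
                    choices=["list", "delete", "prepare", "check"])
parser.add_argument("service", nargs="?",
                    choices=["aodh", "barbican", "ceilometer", "cinder",
                             "designate", "notifications", "manager",
                             "magnum", "manila", "neutron", "nova",
                             "octavia"])

# The supported subcommand parses cleanly ...
args = parser.parse_args(["list"])
assert args.command == "list"

# ... while 'list-exchanges' is rejected: argparse prints the usage text
# and exits with status 2, which Ansible reports as "non-zero return code".
try:
    parser.parse_args(["list-exchanges"])
    exit_code = 0
except SystemExit as exc:
    exit_code = exc.code
assert exit_code == 2
```

Note that the command itself failed instantly; the two-hour `delta` in the task result covers the whole shell task, of which this invocation was the final step.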
06:04:47.681494 | TASK [stage-output : Check sudo]
2026-03-24 06:04:48.576413 | orchestrator | sudo: a password is required
2026-03-24 06:04:48.719509 | orchestrator | ok: Runtime: 0:00:00.013284
2026-03-24 06:04:48.739699 |
2026-03-24 06:04:48.739899 | LOOP [stage-output : Set source and destination for files and folders]
2026-03-24 06:04:48.775427 |
2026-03-24 06:04:48.775663 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-03-24 06:04:48.848623 | orchestrator | ok
2026-03-24 06:04:48.856192 |
2026-03-24 06:04:48.856326 | LOOP [stage-output : Ensure target folders exist]
2026-03-24 06:04:49.361735 | orchestrator | ok: "docs"
2026-03-24 06:04:49.362073 |
2026-03-24 06:04:49.678104 | orchestrator | ok: "artifacts"
2026-03-24 06:04:49.965846 | orchestrator | ok: "logs"
2026-03-24 06:04:49.985066 |
2026-03-24 06:04:49.985261 | LOOP [stage-output : Copy files and folders to staging folder]
2026-03-24 06:04:50.025049 |
2026-03-24 06:04:50.025363 | TASK [stage-output : Make all log files readable]
2026-03-24 06:04:50.357936 | orchestrator | ok
2026-03-24 06:04:50.367360 |
2026-03-24 06:04:50.367548 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-03-24 06:04:50.412767 | orchestrator | skipping: Conditional result was False
2026-03-24 06:04:50.429931 |
2026-03-24 06:04:50.430123 | TASK [stage-output : Discover log files for compression]
2026-03-24 06:04:50.454970 | orchestrator | skipping: Conditional result was False
2026-03-24 06:04:50.470920 |
2026-03-24 06:04:50.471142 | LOOP [stage-output : Archive everything from logs]
2026-03-24 06:04:50.515134 |
2026-03-24 06:04:50.515323 | PLAY [Post cleanup play]
2026-03-24 06:04:50.523402 |
2026-03-24 06:04:50.523506 | TASK [Set cloud fact (Zuul deployment)]
2026-03-24 06:04:50.581096 | orchestrator | ok
2026-03-24 06:04:50.592981 |
2026-03-24 06:04:50.593113 | TASK [Set cloud fact (local deployment)]
2026-03-24 06:04:50.619308 | orchestrator | skipping: Conditional result was False
2026-03-24 06:04:50.635701 |
2026-03-24 06:04:50.635853 | TASK [Clean the cloud environment]
2026-03-24 06:04:51.337812 | orchestrator | 2026-03-24 06:04:51 - clean up servers
2026-03-24 06:04:52.192553 | orchestrator | 2026-03-24 06:04:52 - testbed-manager
2026-03-24 06:04:52.293980 | orchestrator | 2026-03-24 06:04:52 - testbed-node-3
2026-03-24 06:04:52.386203 | orchestrator | 2026-03-24 06:04:52 - testbed-node-4
2026-03-24 06:04:52.478391 | orchestrator | 2026-03-24 06:04:52 - testbed-node-2
2026-03-24 06:04:52.568376 | orchestrator | 2026-03-24 06:04:52 - testbed-node-0
2026-03-24 06:04:52.679440 | orchestrator | 2026-03-24 06:04:52 - testbed-node-1
2026-03-24 06:04:52.782834 | orchestrator | 2026-03-24 06:04:52 - testbed-node-5
2026-03-24 06:04:52.868350 | orchestrator | 2026-03-24 06:04:52 - clean up keypairs
2026-03-24 06:04:52.889918 | orchestrator | 2026-03-24 06:04:52 - testbed
2026-03-24 06:04:52.919318 | orchestrator | 2026-03-24 06:04:52 - wait for servers to be gone
2026-03-24 06:05:03.780871 | orchestrator | 2026-03-24 06:05:03 - clean up ports
2026-03-24 06:05:03.984396 | orchestrator | 2026-03-24 06:05:03 - 03f7056c-fc54-4f92-85cc-3def75f97f53
2026-03-24 06:05:04.276413 | orchestrator | 2026-03-24 06:05:04 - 164530a1-bd95-4d33-b14e-290d2e2f08c9
2026-03-24 06:05:04.522154 | orchestrator | 2026-03-24 06:05:04 - 1cd17d61-8ef8-48d5-a4bf-005f38d811e9
2026-03-24 06:05:04.759456 | orchestrator | 2026-03-24 06:05:04 - 530b1ea4-7027-44de-8ce7-06f5d09c7bc9
2026-03-24 06:05:05.201263 | orchestrator | 2026-03-24 06:05:05 - 676783e2-3dcb-40a9-bdff-38b18d4eacb9
2026-03-24 06:05:05.443228 | orchestrator | 2026-03-24 06:05:05 - 8afba6ad-5f85-41bd-806a-3a88434e3eae
2026-03-24 06:05:05.661481 | orchestrator | 2026-03-24 06:05:05 - e7c95459-4ff3-450d-87a7-a3aa1efb243e
2026-03-24 06:05:05.877048 | orchestrator | 2026-03-24 06:05:05 - clean up volumes
2026-03-24 06:05:06.008688 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-2-node-base
2026-03-24 06:05:06.045667 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-4-node-base
2026-03-24 06:05:06.089002 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-5-node-base
2026-03-24 06:05:06.135884 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-3-node-base
2026-03-24 06:05:06.184744 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-0-node-base
2026-03-24 06:05:06.229526 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-manager-base
2026-03-24 06:05:06.272622 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-3-node-3
2026-03-24 06:05:06.314310 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-6-node-3
2026-03-24 06:05:06.354391 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-5-node-5
2026-03-24 06:05:06.399237 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-1-node-4
2026-03-24 06:05:06.439855 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-1-node-base
2026-03-24 06:05:06.485027 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-8-node-5
2026-03-24 06:05:06.527845 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-2-node-5
2026-03-24 06:05:06.572982 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-0-node-3
2026-03-24 06:05:06.613030 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-7-node-4
2026-03-24 06:05:06.660115 | orchestrator | 2026-03-24 06:05:06 - testbed-volume-4-node-4
2026-03-24 06:05:06.704771 | orchestrator | 2026-03-24 06:05:06 - disconnect routers
2026-03-24 06:05:06.833389 | orchestrator | 2026-03-24 06:05:06 - testbed
2026-03-24 06:05:07.926993 | orchestrator | 2026-03-24 06:05:07 - clean up subnets
2026-03-24 06:05:07.969166 | orchestrator | 2026-03-24 06:05:07 - subnet-testbed-management
2026-03-24 06:05:08.137036 | orchestrator | 2026-03-24 06:05:08 - clean up networks
2026-03-24 06:05:08.330538 | orchestrator | 2026-03-24 06:05:08 - net-testbed-management
2026-03-24 06:05:08.633007 | orchestrator | 2026-03-24 06:05:08 - clean up security groups
2026-03-24 06:05:08.668874 | orchestrator | 2026-03-24 06:05:08 - testbed-management
2026-03-24 06:05:08.819006 | orchestrator | 2026-03-24 06:05:08 - testbed-node
2026-03-24 06:05:08.938243 | orchestrator | 2026-03-24 06:05:08 - clean up floating ips
2026-03-24 06:05:08.978312 | orchestrator | 2026-03-24 06:05:08 - 81.163.192.246
2026-03-24 06:05:09.346157 | orchestrator | 2026-03-24 06:05:09 - clean up routers
2026-03-24 06:05:09.466250 | orchestrator | 2026-03-24 06:05:09 - testbed
2026-03-24 06:05:10.694927 | orchestrator | ok: Runtime: 0:00:19.311636
2026-03-24 06:05:10.699625 |
2026-03-24 06:05:10.699796 | PLAY RECAP
2026-03-24 06:05:10.699919 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-03-24 06:05:10.699979 |
2026-03-24 06:05:10.843945 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-24 06:05:10.846976 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-24 06:05:11.604195 |
2026-03-24 06:05:11.604386 | PLAY [Cleanup play]
2026-03-24 06:05:11.620999 |
2026-03-24 06:05:11.621152 | TASK [Set cloud fact (Zuul deployment)]
2026-03-24 06:05:11.680425 | orchestrator | ok
2026-03-24 06:05:11.690514 |
2026-03-24 06:05:11.690675 | TASK [Set cloud fact (local deployment)]
2026-03-24 06:05:11.735238 | orchestrator | skipping: Conditional result was False
2026-03-24 06:05:11.746026 |
2026-03-24 06:05:11.746166 | TASK [Clean the cloud environment]
2026-03-24 06:05:12.855068 | orchestrator | 2026-03-24 06:05:12 - clean up servers
2026-03-24 06:05:13.381608 | orchestrator | 2026-03-24 06:05:13 - clean up keypairs
2026-03-24 06:05:13.401231 | orchestrator | 2026-03-24 06:05:13 - wait for servers to be gone
2026-03-24 06:05:13.445164 | orchestrator | 2026-03-24 06:05:13 - clean up ports
2026-03-24 06:05:13.531746 | orchestrator | 2026-03-24 06:05:13 - clean up volumes
2026-03-24 06:05:13.634364 | orchestrator | 2026-03-24 06:05:13 - disconnect routers
2026-03-24
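The cleanup passes above tear resources down in a fixed dependency order: servers before their ports and volumes, router interfaces ("disconnect routers") before subnets and networks, and the routers themselves last. A minimal sketch, with hypothetical names rather than the testbed's actual code, encoding and checking that ordering:

```python
# Hypothetical model of the teardown order seen in the cleanup log above;
# the list entries mirror the "clean up ..." phases, not real API calls.
TEARDOWN_ORDER = [
    "servers", "keypairs", "ports", "volumes",
    "router interfaces", "subnets", "networks",
    "security groups", "floating ips", "routers",
]

def removed_before(first: str, second: str) -> bool:
    """True if `first` is cleaned up before `second` in the fixed order."""
    return TEARDOWN_ORDER.index(first) < TEARDOWN_ORDER.index(second)

# Servers hold ports, so they must go first; router interfaces must be
# detached before their subnets can be deleted; the router is last.
assert removed_before("servers", "ports")
assert removed_before("router interfaces", "subnets")
assert all(removed_before(r, "routers") for r in TEARDOWN_ORDER[:-1])
```

Because each phase simply deletes whatever still exists, the order is idempotent: the second cleanup pass in the log walks the same phases and finds nothing left to remove.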
06:05:13.667165 | orchestrator | 2026-03-24 06:05:13 - clean up subnets
2026-03-24 06:05:13.690086 | orchestrator | 2026-03-24 06:05:13 - clean up networks
2026-03-24 06:05:13.870519 | orchestrator | 2026-03-24 06:05:13 - clean up security groups
2026-03-24 06:05:13.911376 | orchestrator | 2026-03-24 06:05:13 - clean up floating ips
2026-03-24 06:05:13.936723 | orchestrator | 2026-03-24 06:05:13 - clean up routers
2026-03-24 06:05:14.294134 | orchestrator | ok: Runtime: 0:00:01.473354
2026-03-24 06:05:14.298025 |
2026-03-24 06:05:14.298187 | PLAY RECAP
2026-03-24 06:05:14.298334 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-03-24 06:05:14.298402 |
2026-03-24 06:05:14.424365 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-03-24 06:05:14.426777 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-24 06:05:15.213159 |
2026-03-24 06:05:15.213350 | PLAY [Base post-fetch]
2026-03-24 06:05:15.230472 |
2026-03-24 06:05:15.230637 | TASK [fetch-output : Set log path for multiple nodes]
2026-03-24 06:05:15.285852 | orchestrator | skipping: Conditional result was False
2026-03-24 06:05:15.297203 |
2026-03-24 06:05:15.297409 | TASK [fetch-output : Set log path for single node]
2026-03-24 06:05:15.334886 | orchestrator | ok
2026-03-24 06:05:15.343602 |
2026-03-24 06:05:15.343738 | LOOP [fetch-output : Ensure local output dirs]
2026-03-24 06:05:15.848634 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/03d6a5508dd54638a48ae341d1b9631e/work/logs"
2026-03-24 06:05:16.137002 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/03d6a5508dd54638a48ae341d1b9631e/work/artifacts"
2026-03-24 06:05:16.397487 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/03d6a5508dd54638a48ae341d1b9631e/work/docs"
2026-03-24 06:05:16.420413 |
2026-03-24 06:05:16.420572 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-24 06:05:17.367916 | orchestrator | changed: .d..t...... ./
2026-03-24 06:05:17.368218 | orchestrator | changed: All items complete
2026-03-24 06:05:17.368268 |
2026-03-24 06:05:18.162131 | orchestrator | changed: .d..t...... ./
2026-03-24 06:05:18.903423 | orchestrator | changed: .d..t...... ./
2026-03-24 06:05:18.935600 |
2026-03-24 06:05:18.935768 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-24 06:05:18.966368 | orchestrator | skipping: Conditional result was False
2026-03-24 06:05:18.970956 | orchestrator | skipping: Conditional result was False
2026-03-24 06:05:18.987756 |
2026-03-24 06:05:18.987888 | PLAY RECAP
2026-03-24 06:05:18.987973 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-24 06:05:18.988015 |
2026-03-24 06:05:19.125076 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-24 06:05:19.126616 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-24 06:05:19.882259 |
2026-03-24 06:05:19.882442 | PLAY [Base post]
2026-03-24 06:05:19.897803 |
2026-03-24 06:05:19.897945 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-24 06:05:20.884132 | orchestrator | changed
2026-03-24 06:05:20.894500 |
2026-03-24 06:05:20.894649 | PLAY RECAP
2026-03-24 06:05:20.894749 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-24 06:05:20.894885 |
2026-03-24 06:05:21.023531 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-24 06:05:21.026133 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-24 06:05:21.814628 |
2026-03-24 06:05:21.814802 | PLAY [Base post-logs]
2026-03-24 06:05:21.825487 |
2026-03-24 06:05:21.825624 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-24 06:05:22.310042 | localhost | changed
2026-03-24 06:05:22.329131 |
2026-03-24 06:05:22.329390 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-24 06:05:22.368758 | localhost | ok
2026-03-24 06:05:22.374927 |
2026-03-24 06:05:22.375074 | TASK [Set zuul-log-path fact]
2026-03-24 06:05:22.393067 | localhost | ok
2026-03-24 06:05:22.406734 |
2026-03-24 06:05:22.406898 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-24 06:05:22.444766 | localhost | ok
2026-03-24 06:05:22.451139 |
2026-03-24 06:05:22.451377 | TASK [upload-logs : Create log directories]
2026-03-24 06:05:22.947524 | localhost | changed
2026-03-24 06:05:22.951796 |
2026-03-24 06:05:22.951946 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-24 06:05:23.485545 | localhost -> localhost | ok: Runtime: 0:00:00.007514
2026-03-24 06:05:23.492106 |
2026-03-24 06:05:23.492333 | TASK [upload-logs : Upload logs to log server]
2026-03-24 06:05:24.067675 | localhost | Output suppressed because no_log was given
2026-03-24 06:05:24.070820 |
2026-03-24 06:05:24.071054 | LOOP [upload-logs : Compress console log and json output]
2026-03-24 06:05:24.132686 | localhost | skipping: Conditional result was False
2026-03-24 06:05:24.140702 | localhost | skipping: Conditional result was False
2026-03-24 06:05:24.153572 |
2026-03-24 06:05:24.153794 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-24 06:05:24.216018 | localhost | skipping: Conditional result was False
2026-03-24 06:05:24.216730 |
2026-03-24 06:05:24.219792 | localhost | skipping: Conditional result was False
2026-03-24 06:05:24.227380 |
2026-03-24 06:05:24.227632 | LOOP [upload-logs : Upload console log and json output]